Oct 14 06:47:30 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 14 06:47:31 crc restorecon[4797]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 06:47:31 crc restorecon[4797]: 
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
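[Editor's note] Keys such as image-registry.openshift-image-registry.svc..5000 in this image-import-ca volume (the list continues just below, and the registry-certificates volume further down uses the same names) appear to encode registry hostname:port pairs, with ".." standing in for ":", which is not a valid ConfigMap key character. A one-line decode, assuming ".." only ever replaces ":":

    # Hypothetical decode of a registry-CA ConfigMap key back to hostname:port.
    print("image-registry.openshift-image-registry.svc..5000".replace("..", ":"))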
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
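[Editor's note] Every context in these messages is container_file_t plus a per-pod pair of MCS categories (s0:c466,c972, s0:c12,c18, s0:c5,c6, and so on); the category pair is what keeps one pod's files off-limits to another pod's processes. restorecon reports the files as "not reset as customized by admin" because container_file_t is a customizable type, which a default relabel leaves in place unless forced (restorecon -F). A minimal sketch for inspecting such a label on a Linux host (the path is copied from the packageserver entries above; assumes it exists and the filesystem carries SELinux xattrs):

    import os

    # The SELinux label is stored in the security.selinux extended attribute,
    # NUL-terminated, e.g. b"system_u:object_r:container_file_t:s0:c12,c18\x00".
    def selinux_label(path: str) -> str:
        return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()

    print(selinux_label("/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts"))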
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 06:47:31 crc restorecon[4797]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 
06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 06:47:31 crc 
restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 06:47:31 crc restorecon[4797]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 06:47:31 crc restorecon[4797]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 14 06:47:32 crc kubenswrapper[5058]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.519022 5058 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.523990 5058 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524022 5058 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524033 5058 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524043 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524054 5058 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524063 5058 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524071 5058 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524079 5058 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524087 5058 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524094 5058 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524101 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524109 5058 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524117 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524136 5058 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524146 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524156 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524164 5058 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524172 5058 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524182 5058 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524192 5058 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524202 5058 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524212 5058 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524221 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524230 5058 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524239 5058 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524248 5058 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524256 5058 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524264 5058 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524271 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524279 5058 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524286 5058 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524294 5058 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524302 5058 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524310 5058 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524317 5058 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524325 5058 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524332 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524340 5058 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524348 5058 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524355 5058 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524363 5058 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524370 5058 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524378 5058 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524386 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524394 5058 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524402 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524409 5058 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524417 5058 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524425 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524432 5058 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524442 5058 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524452 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524461 5058 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524471 5058 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
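[Editor's note] The feature_gate.go records above fall into three classes, distinguishable by source line: 330 marks gates this kubelet binary simply does not know (OpenShift-defined gates such as RouteAdvertisements or NewOLM, which other cluster components consume from the same list; the kubelet warns and skips them), 351 marks a deprecated gate that is still honored (KMSv1=true), and 353 marks gates that are already GA, so setting them to true is a no-op (ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs). The same run is re-emitted several times during startup, so a tally is more readable than the raw stream. A minimal sketch, assuming the journal text arrives on stdin and relying only on the message format visible above:

```python
import re
import sys
from collections import Counter

# Count "unrecognized feature gate: <Name>" warnings in a journal dump.
pattern = re.compile(r"unrecognized feature gate: (\S+)")
counts = Counter(
    match.group(1)
    for line in sys.stdin
    for match in pattern.finditer(line)
)

for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")
```

Piped from something like `journalctl -u kubelet` (unit name assumed), this should show the same count for every gate if the later runs are plain repeats of one list rather than different lists.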
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524480 5058 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524489 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524497 5058 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524505 5058 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524513 5058 feature_gate.go:330] unrecognized feature gate: Example Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524521 5058 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524528 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524535 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524543 5058 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524551 5058 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524558 5058 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524568 5058 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524575 5058 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524582 5058 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524590 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524598 5058 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.524605 5058 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524742 5058 flags.go:64] FLAG: --address="0.0.0.0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524760 5058 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524776 5058 flags.go:64] FLAG: --anonymous-auth="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524787 5058 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524824 5058 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524834 5058 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524847 5058 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524858 5058 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524869 5058 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524878 5058 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524888 5058 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524898 5058 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524907 5058 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524916 5058 flags.go:64] FLAG: --cgroup-root="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524925 5058 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524933 5058 flags.go:64] FLAG: --client-ca-file="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524945 5058 flags.go:64] FLAG: --cloud-config="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524954 5058 flags.go:64] FLAG: --cloud-provider="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524963 5058 flags.go:64] FLAG: --cluster-dns="[]" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524973 5058 flags.go:64] FLAG: --cluster-domain="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524981 5058 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524991 5058 flags.go:64] FLAG: --config-dir="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.524999 5058 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525009 5058 flags.go:64] FLAG: --container-log-max-files="5" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525020 5058 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525029 5058 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525037 5058 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525047 5058 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525056 5058 flags.go:64] FLAG: --contention-profiling="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525065 5058 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525073 5058 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525084 5058 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525094 5058 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525105 5058 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525114 5058 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525123 5058 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525131 5058 flags.go:64] FLAG: --enable-load-reader="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525140 5058 flags.go:64] FLAG: --enable-server="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525149 5058 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525161 5058 
flags.go:64] FLAG: --event-burst="100" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525170 5058 flags.go:64] FLAG: --event-qps="50" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525179 5058 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525187 5058 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525196 5058 flags.go:64] FLAG: --eviction-hard="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525212 5058 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525221 5058 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525230 5058 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525239 5058 flags.go:64] FLAG: --eviction-soft="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525248 5058 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525257 5058 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525265 5058 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525274 5058 flags.go:64] FLAG: --experimental-mounter-path="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525283 5058 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525291 5058 flags.go:64] FLAG: --fail-swap-on="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525300 5058 flags.go:64] FLAG: --feature-gates="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525311 5058 flags.go:64] FLAG: --file-check-frequency="20s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525320 5058 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525329 5058 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525338 5058 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525347 5058 flags.go:64] FLAG: --healthz-port="10248" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525356 5058 flags.go:64] FLAG: --help="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525366 5058 flags.go:64] FLAG: --hostname-override="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525374 5058 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525402 5058 flags.go:64] FLAG: --http-check-frequency="20s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525411 5058 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525420 5058 flags.go:64] FLAG: --image-credential-provider-config="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525429 5058 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525438 5058 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525448 5058 flags.go:64] FLAG: --image-service-endpoint="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525457 5058 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525467 5058 flags.go:64] FLAG: --kube-api-burst="100" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525476 5058 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525485 5058 flags.go:64] FLAG: --kube-api-qps="50" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525494 5058 flags.go:64] FLAG: --kube-reserved="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525502 5058 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525511 5058 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525520 5058 flags.go:64] FLAG: --kubelet-cgroups="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525528 5058 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525537 5058 flags.go:64] FLAG: --lock-file="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525545 5058 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525554 5058 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525564 5058 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525576 5058 flags.go:64] FLAG: --log-json-split-stream="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525585 5058 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525594 5058 flags.go:64] FLAG: --log-text-split-stream="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525603 5058 flags.go:64] FLAG: --logging-format="text" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525612 5058 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525621 5058 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525630 5058 flags.go:64] FLAG: --manifest-url="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525638 5058 flags.go:64] FLAG: --manifest-url-header="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525650 5058 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525659 5058 flags.go:64] FLAG: --max-open-files="1000000" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525670 5058 flags.go:64] FLAG: --max-pods="110" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525679 5058 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525688 5058 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525697 5058 flags.go:64] FLAG: --memory-manager-policy="None" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525707 5058 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525716 5058 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525724 5058 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525733 5058 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525752 5058 flags.go:64] FLAG: --node-status-max-images="50" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525761 5058 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525770 5058 flags.go:64] FLAG: --oom-score-adj="-999" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525779 5058 flags.go:64] FLAG: --pod-cidr="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525788 5058 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525824 5058 flags.go:64] FLAG: --pod-manifest-path="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525834 5058 flags.go:64] FLAG: --pod-max-pids="-1" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525843 5058 flags.go:64] FLAG: --pods-per-core="0" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525852 5058 flags.go:64] FLAG: --port="10250" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525861 5058 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525870 5058 flags.go:64] FLAG: --provider-id="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525879 5058 flags.go:64] FLAG: --qos-reserved="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525888 5058 flags.go:64] FLAG: --read-only-port="10255" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525897 5058 flags.go:64] FLAG: --register-node="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525906 5058 flags.go:64] FLAG: --register-schedulable="true" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525914 5058 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525928 5058 flags.go:64] FLAG: --registry-burst="10" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525937 5058 flags.go:64] FLAG: --registry-qps="5" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525946 5058 flags.go:64] FLAG: --reserved-cpus="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525955 5058 flags.go:64] FLAG: --reserved-memory="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525965 5058 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525974 5058 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525983 5058 flags.go:64] FLAG: --rotate-certificates="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.525992 5058 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526001 5058 flags.go:64] FLAG: --runonce="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526010 5058 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526019 5058 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526029 5058 flags.go:64] FLAG: --seccomp-default="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526039 5058 flags.go:64] FLAG: --serialize-image-pulls="true" 
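[Editor's note] Each flags.go:64 record above is a single flag=value pair, logged before the config file is merged, which makes the dump mechanically recoverable. A sketch that rebuilds the dictionary — it assumes, as everywhere in the dump above, that values are double-quoted and contain no embedded quotes:

```python
import re
import sys

# Rebuild {flag: value} from the kubelet's startup FLAG dump on stdin.
flag_re = re.compile(r'FLAG: --([\w.-]+)="([^"]*)"')
flags = {
    match.group(1): match.group(2)
    for line in sys.stdin
    for match in flag_re.finditer(line)
}

print(flags.get("node-ip"))               # "192.168.126.11" in this boot
print(flags.get("register-with-taints"))  # "node-role.kubernetes.io/master=:NoSchedule"
```

Worth remembering when reading the values: the dump includes built-in defaults for flags that were never passed (--containerd points at a containerd socket although this node runs CRI-O), so it is a record of the command line plus defaults, not the effective configuration.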
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526048 5058 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526059 5058 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526068 5058 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526077 5058 flags.go:64] FLAG: --storage-driver-password="root" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526086 5058 flags.go:64] FLAG: --storage-driver-secure="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526095 5058 flags.go:64] FLAG: --storage-driver-table="stats" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526103 5058 flags.go:64] FLAG: --storage-driver-user="root" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526113 5058 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526123 5058 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526132 5058 flags.go:64] FLAG: --system-cgroups="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526140 5058 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526155 5058 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526164 5058 flags.go:64] FLAG: --tls-cert-file="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526173 5058 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526184 5058 flags.go:64] FLAG: --tls-min-version="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526192 5058 flags.go:64] FLAG: --tls-private-key-file="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526202 5058 flags.go:64] FLAG: --topology-manager-policy="none" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526211 5058 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526219 5058 flags.go:64] FLAG: --topology-manager-scope="container" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526228 5058 flags.go:64] FLAG: --v="2" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526239 5058 flags.go:64] FLAG: --version="false" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526250 5058 flags.go:64] FLAG: --vmodule="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526261 5058 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.526271 5058 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526487 5058 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526497 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526506 5058 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526515 5058 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526523 5058 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 06:47:32 crc 
kubenswrapper[5058]: W1014 06:47:32.526531 5058 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526538 5058 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526546 5058 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526556 5058 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526567 5058 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526576 5058 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526585 5058 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526593 5058 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526600 5058 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526608 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526616 5058 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526624 5058 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526632 5058 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526639 5058 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526647 5058 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526655 5058 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526663 5058 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526670 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526690 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526698 5058 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526707 5058 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526716 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526724 5058 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526732 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526740 5058 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526749 5058 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526757 5058 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526765 5058 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526773 5058 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526780 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526790 5058 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526823 5058 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526833 5058 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526843 5058 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526851 5058 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526859 5058 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526867 5058 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526875 5058 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526883 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526890 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526898 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526906 5058 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526914 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526922 5058 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526929 5058 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526937 5058 feature_gate.go:330] unrecognized feature gate: Example Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526945 5058 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526953 5058 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526961 5058 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526968 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526980 5058 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526988 5058 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.526996 5058 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527004 5058 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527012 5058 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527020 5058 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527027 5058 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527035 5058 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527044 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527051 5058 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527059 5058 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527067 5058 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527074 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527082 5058 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527090 5058 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.527097 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.527121 5058 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.539834 5058 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.539897 5058 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540026 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540048 5058 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540059 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540068 5058 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 
06:47:32.540077 5058 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540089 5058 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540102 5058 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540111 5058 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540119 5058 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540127 5058 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540138 5058 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540149 5058 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540159 5058 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540169 5058 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540178 5058 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540186 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540197 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540208 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540218 5058 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540229 5058 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540240 5058 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540250 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540260 5058 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540270 5058 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540281 5058 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540290 5058 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540298 5058 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540306 5058 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540314 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540323 5058 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540331 5058 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540338 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540346 5058 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540354 5058 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540364 5058 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540373 5058 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540386 5058 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540399 5058 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540409 5058 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540419 5058 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540429 5058 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540438 5058 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540449 5058 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540460 5058 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540471 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540483 5058 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540495 5058 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540505 5058 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540518 5058 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540530 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540540 5058 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540583 5058 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540595 5058 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540605 5058 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540617 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540627 5058 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540638 5058 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540648 5058 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540658 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540669 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540681 5058 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540691 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540701 5058 feature_gate.go:330] unrecognized feature gate: Example Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540710 5058 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540720 5058 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540732 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540741 5058 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540752 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540762 5058 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540772 5058 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.540784 5058 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.540829 5058 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541146 5058 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541168 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541181 5058 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541193 5058 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541205 5058 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541215 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541225 5058 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541235 5058 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541244 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541262 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541272 5058 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541282 5058 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541292 5058 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541301 5058 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541311 5058 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541321 5058 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541330 5058 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541344 5058 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541361 5058 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541373 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541415 5058 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541425 5058 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541433 5058 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541441 5058 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541449 5058 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541457 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541465 5058 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541473 5058 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541483 5058 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541493 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541502 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541511 5058 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541520 5058 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541528 5058 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541539 5058 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541549 5058 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541558 5058 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541567 5058 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541575 5058 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541584 5058 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541592 5058 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541600 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541609 5058 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541618 5058 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541626 5058 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541634 5058 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541642 5058 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541650 5058 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541658 5058 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541665 5058 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541673 5058 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541681 5058 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541689 5058 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541696 5058 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541704 5058 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541712 5058 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541719 5058 feature_gate.go:330] unrecognized feature gate: Example Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541727 5058 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541735 5058 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541743 5058 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541750 5058 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure 
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541758 5058 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541766 5058 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541775 5058 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541784 5058 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541824 5058 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541835 5058 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541848 5058 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541860 5058 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541871 5058 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.541884 5058 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.541897 5058 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.542215 5058 server.go:940] "Client rotation is on, will bootstrap in background" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.548770 5058 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.548956 5058 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
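[Editor's note] The feature_gate.go:386 summary, logged verbatim three times above, is the other end of all the gate warnings: the resolved Kubernetes-side map. Only fifteen known gates were explicitly set, four of them true. The Go map literal parses mechanically; a sketch against the exact line from this log:

```python
import re

summary = (
    "feature gates: {map[CloudDualStackNodeIPs:true "
    "DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false "
    "EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false "
    "ProcMountType:false RouteExternalCertificate:false "
    "ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false "
    "UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false "
    "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}"
)

# Pull Name:bool pairs out of the Go map literal.
gates = {name: value == "true"
         for name, value in re.findall(r"(\w+):(true|false)", summary)}

print(len(gates))                                  # 15
print(sorted(g for g, on in gates.items() if on))  # the four enabled gates
```

Everything warned about as unrecognized earlier was dropped before this point, which is why none of the OpenShift-only names appear in the map.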
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.550674 5058 server.go:997] "Starting client certificate rotation"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.550730 5058 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.550898 5058 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 20:27:57.187311408 +0000 UTC
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.551026 5058 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1381h40m24.636290829s for next certificate rotation
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.581117 5058 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.584722 5058 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.610424 5058 log.go:25] "Validated CRI v1 runtime API"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.652885 5058 log.go:25] "Validated CRI v1 image API"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.656430 5058 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.663781 5058 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-14-06-39-19-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.663850 5058 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:46 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.689426 5058 manager.go:217] Machine: {Timestamp:2025-10-14 06:47:32.686646944 +0000 UTC m=+0.597730820 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0bd4897c-1e38-4562-b7ae-0d06c96681c4 BootID:df2d52db-1c59-470b-85f0-4c17f56af73f Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:46 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:db:e8:7b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:db:e8:7b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b8:de:c0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:86:06:c7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:51:cc:c8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:91:b6:2d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:4b:98:41 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:32:44:38 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:90:ed:5f:9c:c1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:b1:fe:1c:98:e6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.689950 5058 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.690128 5058 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.691160 5058 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.691635 5058 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.691726 5058 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.692234 5058 topology_manager.go:138] "Creating topology manager with none policy"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.692263 5058 container_manager_linux.go:303] "Creating device plugin manager"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.692930 5058 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
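One detail worth pulling out of the nodeConfig blob above is HardEvictionThresholds: memory.available is bounded by an absolute quantity ("100Mi") while the nodefs/imagefs signals use percentages of capacity, and every comparison uses the LessThan operator. A hedged sketch of that evaluation rule (hand-rolled illustration with made-up readings; the real kubelet uses its eviction manager and resource.Quantity parsing):

package main

import "fmt"

// Threshold mirrors the logged shape: either an absolute quantity in bytes
// or a fraction of capacity, compared with LessThan.
type Threshold struct {
	Signal   string
	Quantity int64   // absolute bytes; 0 means "use Percent of capacity"
	Percent  float64 // fraction of capacity, e.g. 0.1 for 10%
}

func (t Threshold) Exceeded(availableBytes, capacityBytes int64) bool {
	limit := t.Quantity
	if limit == 0 {
		limit = int64(t.Percent * float64(capacityBytes))
	}
	return availableBytes < limit // Operator "LessThan" from the log
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // "100Mi"
	nodefs := Threshold{Signal: "nodefs.available", Percent: 0.1}        // Percentage:0.1
	// Hypothetical readings, not taken from this node:
	fmt.Println(memory.Signal, memory.Exceeded(80<<20, 32<<30)) // true: 80Mi < 100Mi
	fmt.Println(nodefs.Signal, nodefs.Exceeded(17<<30, 85<<30)) // false: 17Gi > 8.5Gi
}

Crossing a hard threshold has no grace period (GracePeriod:0 above), which is why the hard limits are set well below the soft-pressure range an operator would normally alert on.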
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.693022 5058 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.693537 5058 state_mem.go:36] "Initialized new in-memory state store"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.694362 5058 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.701316 5058 kubelet.go:418] "Attempting to sync node with API server"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.701393 5058 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.701420 5058 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.701437 5058 kubelet.go:324] "Adding apiserver pod source"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.701453 5058 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.706160 5058 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.707124 5058 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.714168 5058 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.715744 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.715775 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.715784 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.715809 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.715823 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.716279 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.716297 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.716986 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.717020 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.717042 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.717066 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.717083 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.717053 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.717032 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.717170 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.717201 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.718357 5058 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.719173 5058 server.go:1280] "Started kubelet"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.719313 5058 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.720322 5058 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.721241 5058 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.721377 5058 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 14 06:47:32 crc systemd[1]: Started Kubernetes Kubelet.
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723068 5058 server.go:460] "Adding debug handlers to kubelet server"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723239 5058 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723286 5058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.723481 5058 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723501 5058 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723523 5058 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723546 5058 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:02:03.737259854 +0000 UTC
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723592 5058 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1123h14m31.013671227s for next certificate rotation
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.723622 5058 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.724201 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.724334 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.724428 5058 factory.go:55] Registering systemd factory
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.724458 5058 factory.go:221] Registration of the systemd container factory successfully
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.724786 5058 factory.go:153] Registering CRI-O factory
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.724831 5058 factory.go:221] Registration of the crio container factory successfully
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.724908 5058 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.725079 5058 factory.go:103] Registering Raw factory
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.725148 5058 manager.go:1196] Started watching for new ooms in manager
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.726633 5058 manager.go:319] Starting recovery of all containers
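After a couple more API-connectivity errors, the rest of this capture is the volume manager's recovery pass: each reconstruct.go:130 entry below is the kubelet rebuilding its actual-state-of-world from what is already on disk, with every directory found under /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name> recorded as "uncertain" until it can be reconciled against the API server. A rough sketch of that directory walk (illustrative only; the real logic lives in the volume manager's reconstruct.go and handles mount checks, SELinux contexts, and CSI devices, which this sketch does not):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// pods/<podUID>/volumes/<pluginName>/<volumeName>, matching the
	// volumeName="kubernetes.io/<plugin>/<podUID>-<name>" values logged below.
	matches, err := filepath.Glob("/var/lib/kubelet/pods/*/volumes/*/*")
	if err != nil {
		panic(err)
	}
	for _, dir := range matches {
		volName := filepath.Base(dir)
		plugin := filepath.Base(filepath.Dir(dir))
		podUID := filepath.Base(filepath.Dir(filepath.Dir(filepath.Dir(dir))))
		fmt.Printf("Volume is marked as uncertain: pod=%s plugin=%s volume=%s\n", podUID, plugin, volName)
	}
}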
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.728375 5058 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e48b66e3c1887 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14 06:47:32.719122567 +0000 UTC m=+0.630206413,LastTimestamp:2025-10-14 06:47:32.719122567 +0000 UTC m=+0.630206413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741635 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741729 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741756 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741775 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741809 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741829 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741848 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741869 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741889 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741909 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741923 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741937 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741961 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.741988 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742002 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742017 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742035 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742050 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742071 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742095 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742112 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742130 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742150 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742168 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742186 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742201 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742224 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742244 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742260 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742313 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742331 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742375 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742410 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742427 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742442 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742462 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742478 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742496 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742510 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742547 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742565 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742583 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742600 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742614 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742629 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742646 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742662 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742685 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742701 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742742 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742765 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742780 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742824 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742855 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742873 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742895 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742917 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742932 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742946 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742966 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742982 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.742995 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.743017 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748567 5058 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748658 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748688 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748707 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748726 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748747 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748767 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748790 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748836 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748856 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748878 5058 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748901 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748925 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748961 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.748983 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749005 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749027 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749046 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749068 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749088 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749110 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749133 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749156 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749178 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749201 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749223 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749243 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749264 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749284 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749308 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749327 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749345 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749394 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749417 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749439 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749460 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749479 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749499 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749521 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749542 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749563 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749585 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749635 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749665 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749689 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749712 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749736 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749762 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749789 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749911 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749940 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749964 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.749988 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750010 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750029 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750048 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750068 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750089 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750109 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750131 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750155 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750175 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750195 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750216 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750237 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750260 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750285 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750305 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750326 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750348 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750370 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750392 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750414 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750433 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750456 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750475 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750496 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750516 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750538 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750560 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750579 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750601 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750626 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750649 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750669 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750689 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750709 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750732 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750755 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750775 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750823 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750851 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750873 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750893 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750913 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750936 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750956 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.750976 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751000 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751020 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751040 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751061 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751083 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751104 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751122 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751138 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751153 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751170 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751185 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751201 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751217 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751233 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751249 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751264 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751280 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751300 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751321 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751341 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751361 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751380 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751400 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751425 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751445 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751467 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751488 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751509 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751528 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751548 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751564 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751582 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751689 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751716 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751742 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751767 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751787 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751827 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751847 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751868 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751886 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751908 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751929 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751950 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751968 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.751988 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.752008 5058 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.752024 5058 reconstruct.go:97] "Volume reconstruction finished" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.752037 5058 reconciler.go:26] "Reconciler: start to sync state" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.758722 5058 manager.go:324] Recovery completed Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.773298 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.775304 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.775351 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.775361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.776258 5058 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.776270 5058 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.776297 5058 state_mem.go:36] "Initialized new in-memory state store" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.787131 5058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.788580 5058 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.788627 5058 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.788661 5058 kubelet.go:2335] "Starting kubelet main sync loop" Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.788713 5058 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 14 06:47:32 crc kubenswrapper[5058]: W1014 06:47:32.792585 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.792647 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.794028 5058 policy_none.go:49] "None policy: Start" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.794935 5058 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.794968 5058 state_mem.go:35] "Initializing new in-memory state store" Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.824561 5058 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.857729 5058 manager.go:334] "Starting Device Plugin manager" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.857814 5058 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.857838 5058 server.go:79] "Starting device plugin registration server" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.858425 5058 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.858449 5058 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.859308 5058 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.859438 5058 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.859457 5058 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.870896 5058 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.889641 5058 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 14 06:47:32 crc kubenswrapper[5058]: 
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.889786 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891270 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891310 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891457 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891680 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.891740 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892060 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892086 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892096 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892167 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892369 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.892431 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893364 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893397 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893408 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893386 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893530 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893769 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893890 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893907 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893931 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893949 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.893950 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895048 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895083 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895094 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895161 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895175 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895184 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.895290 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896090 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896126 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896320 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896363 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896390 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896672 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896720 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896781 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896815 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.896824 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.898667 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.898732 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.898759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.929026 5058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms"
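The connection-refused errors against https://api-int.crc.testing:6443 are expected at this point: the API server the kubelet is dialing is kube-apiserver-crc, one of the static pods it is only now starting, so every API call fails until that pod is up, and the lease controller simply retries on a growing interval (400ms here; a later retry appears further down). A standalone sketch of that dial-and-retry pattern follows; the doubling schedule and cap are illustrative assumptions, not the kubelet's exact parameters.

// leaseretry.go — retry-with-backoff pattern suggested by the lease messages.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "api-int.crc.testing:6443" // endpoint taken from the log
	interval := 400 * time.Millisecond           // initial interval, as logged
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable; lease can be ensured")
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2 // assumed doubling; real schedule may differ
	}
	fmt.Println("giving up for now; a controller would keep retrying")
}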
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955119 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955151 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955227 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955287 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955314 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955339 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955382 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955415 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955445 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955750 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955843 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.955880 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.958723 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.960435 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.960488 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.960507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:32 crc kubenswrapper[5058]: I1014 06:47:32.960548 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:32 crc kubenswrapper[5058]: E1014 06:47:32.961228 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057510 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057575 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057608 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057672 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057704 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057759 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057791 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057864 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057926 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.057961 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058052 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058110 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058140 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058690 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058716 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058829 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058755 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058873 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058763 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058839 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058890 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058916 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058688 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058817 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058964 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.058997 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.059025 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.059606 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.162423 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.164321 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.164489 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.164602 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.164852 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:33 crc 
kubenswrapper[5058]: E1014 06:47:33.165503 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.229271 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.239827 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.255650 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.276378 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.284321 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.289233 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-10e19b2ff4860b7ad4cba31b9b012639c82b51d9386ec8dd4979ec7d3d00c396 WatchSource:0}: Error finding container 10e19b2ff4860b7ad4cba31b9b012639c82b51d9386ec8dd4979ec7d3d00c396: Status 404 returned error can't find the container with id 10e19b2ff4860b7ad4cba31b9b012639c82b51d9386ec8dd4979ec7d3d00c396 Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.291973 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-74575919f021e8c5195ce613ac19a63c8cd80a569fe6b9c3d0eb38502a9619d7 WatchSource:0}: Error finding container 74575919f021e8c5195ce613ac19a63c8cd80a569fe6b9c3d0eb38502a9619d7: Status 404 returned error can't find the container with id 74575919f021e8c5195ce613ac19a63c8cd80a569fe6b9c3d0eb38502a9619d7 Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.306548 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-10ebec27b2dbf72f9f763780c1d1a2f475ad890c25facea490812ebfdb5cb3b2 WatchSource:0}: Error finding container 10ebec27b2dbf72f9f763780c1d1a2f475ad890c25facea490812ebfdb5cb3b2: Status 404 returned error can't find the container with id 10ebec27b2dbf72f9f763780c1d1a2f475ad890c25facea490812ebfdb5cb3b2 Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.308195 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-132be42dee7081b5d3690b47bea67ce04864532d0a0691501340d94e6415862a WatchSource:0}: Error finding container 132be42dee7081b5d3690b47bea67ce04864532d0a0691501340d94e6415862a: Status 404 returned error can't find the container with id 132be42dee7081b5d3690b47bea67ce04864532d0a0691501340d94e6415862a Oct 14 06:47:33 crc kubenswrapper[5058]: E1014 06:47:33.331070 5058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.566239 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.568380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.568442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.568460 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.568501 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:33 crc kubenswrapper[5058]: E1014 06:47:33.569184 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.722917 5058 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.740407 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:33 crc kubenswrapper[5058]: E1014 06:47:33.740541 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.793856 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74575919f021e8c5195ce613ac19a63c8cd80a569fe6b9c3d0eb38502a9619d7"} Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.795661 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"10e19b2ff4860b7ad4cba31b9b012639c82b51d9386ec8dd4979ec7d3d00c396"} Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.797711 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"132be42dee7081b5d3690b47bea67ce04864532d0a0691501340d94e6415862a"} Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.799428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"10ebec27b2dbf72f9f763780c1d1a2f475ad890c25facea490812ebfdb5cb3b2"} Oct 14 06:47:33 crc kubenswrapper[5058]: I1014 06:47:33.800732 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"621547e58db8738ac9ce5bd73c216d95f054da5c0b1b4797f0a9fc24799981c8"} Oct 14 06:47:33 crc kubenswrapper[5058]: W1014 06:47:33.871433 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:33 crc kubenswrapper[5058]: E1014 06:47:33.871541 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:34 crc kubenswrapper[5058]: W1014 06:47:34.003646 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:34 crc kubenswrapper[5058]: E1014 06:47:34.004192 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:34 crc kubenswrapper[5058]: E1014 06:47:34.132708 5058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Oct 14 06:47:34 crc kubenswrapper[5058]: W1014 06:47:34.218574 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:34 crc kubenswrapper[5058]: E1014 06:47:34.218708 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.369571 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.372301 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.372384 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.372411 5058 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.372456 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:34 crc kubenswrapper[5058]: E1014 06:47:34.373282 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.722475 5058 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.808374 5058 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0" exitCode=0 Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.808521 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.808590 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.810758 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.810874 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.810902 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.812410 5058 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36" exitCode=0 Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.812456 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.812582 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.813965 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.814014 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.814031 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.815767 5058 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d" exitCode=0 Oct 
14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.815902 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.816085 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.817691 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.817851 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.817889 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.827531 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.827634 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.827634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.827819 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.827843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.829329 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.829369 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.829384 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.831024 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6" exitCode=0 Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.831083 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6"} Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.831135 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.832205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.832249 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.832270 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.837988 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.839761 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.839944 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:34 crc kubenswrapper[5058]: I1014 06:47:34.839974 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: W1014 06:47:35.583025 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:35 crc kubenswrapper[5058]: E1014 06:47:35.583138 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.723204 5058 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:35 crc kubenswrapper[5058]: E1014 06:47:35.734054 5058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.837591 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.837636 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 
06:47:35.837646 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.837654 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.840088 5058 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e" exitCode=0 Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.840144 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.840706 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.842283 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.842345 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.842361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.845155 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.845144 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ec3190c76ec9e141af1d35f78233b91af731bd38bcb4ed6da5413e0d4171b404"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.846378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.846408 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.846421 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.854395 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.854451 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.854463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.854480 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66"} Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.854451 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858056 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858100 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858114 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858063 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858195 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.858208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.973876 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.975042 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.975082 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.975095 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:35 crc kubenswrapper[5058]: I1014 06:47:35.975120 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:35 crc kubenswrapper[5058]: E1014 06:47:35.975650 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Oct 14 06:47:36 crc kubenswrapper[5058]: W1014 06:47:36.206519 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Oct 14 06:47:36 crc kubenswrapper[5058]: E1014 06:47:36.206637 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.863831 5058 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d"} Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.864005 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.865691 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.865759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.865778 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868218 5058 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8" exitCode=0 Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868380 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868551 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8"} Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868692 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868735 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.868746 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.869993 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.870046 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.870068 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.870966 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.870998 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.871016 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.871043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.871097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:36 crc kubenswrapper[5058]: I1014 06:47:36.871126 
5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.185148 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.185493 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.187114 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.187173 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.187197 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.878574 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3"} Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.878653 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0"} Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.878687 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351"} Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.878655 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.878791 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.880102 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.880158 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.880173 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.882453 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.882662 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.884073 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.884125 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:37 crc kubenswrapper[5058]: I1014 06:47:37.884201 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.522912 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.523182 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.525609 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.525659 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.525674 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.887355 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0"} Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.887419 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef"} Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.887550 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.888575 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.888646 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:38 crc kubenswrapper[5058]: I1014 06:47:38.888666 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.175819 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.177945 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.178011 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.178028 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.178073 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.552301 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.552658 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.554650 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:39 crc 
kubenswrapper[5058]: I1014 06:47:39.554717 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.554770 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.891644 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.893211 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.893354 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.893382 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.959432 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.959704 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.961392 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.961478 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.961492 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:39 crc kubenswrapper[5058]: I1014 06:47:39.967432 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.846298 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.846625 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.849144 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.849205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.849225 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.895202 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.896695 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.896752 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:40 crc kubenswrapper[5058]: I1014 06:47:40.896764 5058 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.523058 5058 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.523175 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.705672 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.706731 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.708442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.708514 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:41 crc kubenswrapper[5058]: I1014 06:47:41.708533 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:42 crc kubenswrapper[5058]: I1014 06:47:42.028154 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 14 06:47:42 crc kubenswrapper[5058]: I1014 06:47:42.028448 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:42 crc kubenswrapper[5058]: I1014 06:47:42.029669 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:42 crc kubenswrapper[5058]: I1014 06:47:42.029694 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:42 crc kubenswrapper[5058]: I1014 06:47:42.029704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:42 crc kubenswrapper[5058]: E1014 06:47:42.871114 5058 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.021487 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.021781 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.024013 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.024093 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.024106 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.029724 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.029915 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.030826 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.030859 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.030870 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.034304 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.903835 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.905225 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.905353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:43 crc kubenswrapper[5058]: I1014 06:47:43.905373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:46 crc kubenswrapper[5058]: I1014 06:47:46.723879 5058 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 14 06:47:46 crc kubenswrapper[5058]: W1014 06:47:46.738398 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 14 06:47:46 crc kubenswrapper[5058]: I1014 06:47:46.738531 5058 trace.go:236] Trace[1404945523]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 06:47:36.736) (total time: 10001ms): Oct 14 06:47:46 crc kubenswrapper[5058]: Trace[1404945523]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:46.738) Oct 14 06:47:46 crc kubenswrapper[5058]: Trace[1404945523]: [10.001752463s] [10.001752463s] END Oct 14 06:47:46 crc kubenswrapper[5058]: E1014 06:47:46.738566 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 14 06:47:46 crc 
kubenswrapper[5058]: W1014 06:47:46.836832 5058 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 14 06:47:46 crc kubenswrapper[5058]: I1014 06:47:46.836980 5058 trace.go:236] Trace[1393430452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 06:47:36.835) (total time: 10001ms):
Oct 14 06:47:46 crc kubenswrapper[5058]: Trace[1393430452]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:47:46.836)
Oct 14 06:47:46 crc kubenswrapper[5058]: Trace[1393430452]: [10.001908939s] [10.001908939s] END
Oct 14 06:47:46 crc kubenswrapper[5058]: E1014 06:47:46.837016 5058 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 14 06:47:47 crc kubenswrapper[5058]: I1014 06:47:47.205007 5058 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 14 06:47:47 crc kubenswrapper[5058]: I1014 06:47:47.205128 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 14 06:47:47 crc kubenswrapper[5058]: I1014 06:47:47.211306 5058 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 14 06:47:47 crc kubenswrapper[5058]: I1014 06:47:47.211412 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.524415 5058 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.524520 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.709762 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.710088 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.711864 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.711914 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.711926 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.714198 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.726018 5058 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.930264 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.930360 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.932043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.932099 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:51 crc kubenswrapper[5058]: I1014 06:47:51.932118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.059666 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.059990 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.061743 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.061888 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.061914 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.081741 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.191758 5058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.194265 5058 trace.go:236] Trace[66762416]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 06:47:38.786) (total time: 13407ms):
Oct 14 06:47:52 crc kubenswrapper[5058]: Trace[66762416]: ---"Objects listed" error: 13407ms (06:47:52.194)
Oct 14 06:47:52 crc kubenswrapper[5058]: Trace[66762416]: [13.407578222s] [13.407578222s] END
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.194456 5058 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.194671 5058 trace.go:236] Trace[1688469352]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 06:47:39.837) (total time: 12357ms):
Oct 14 06:47:52 crc kubenswrapper[5058]: Trace[1688469352]: ---"Objects listed" error: 12357ms (06:47:52.194)
Oct 14 06:47:52 crc kubenswrapper[5058]: Trace[1688469352]: [12.357351653s] [12.357351653s] END
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.194722 5058 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.195953 5058 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.197922 5058 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.212157 5058 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234131 5058 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41230->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234187 5058 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38096->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234215 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41230->192.168.126.11:17697: read: connection reset by peer"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234267 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38096->192.168.126.11:17697: read: connection reset by peer"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234736 5058 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.234782 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.711107 5058 apiserver.go:52] "Watching apiserver"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.713358 5058 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.713699 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714238 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714311 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714367 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714401 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.714453 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714530 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.714717 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.714775 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.714931 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.717955 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.718014 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.718432 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.718642 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.718894 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.719069 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.719267 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.719459 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.720319 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.724488 5058 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.763368 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.785511 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.801735 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.801831 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.801884 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.801956 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.802557 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803106 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.802292 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.802456 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803004 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803012 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803276 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.801974 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803351 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803419 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803553 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803852 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803904 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803948 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803967 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.803985 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804045 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804093 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804238 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804283 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804318 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804354 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804389 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804429 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804463 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804496 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804531 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804568 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804573 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804604 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804684 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804721 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804745 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804761 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804864 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804905 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804931 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804959 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.804992 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805013 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805032 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805061 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805090 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805119 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805140 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805148 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805282 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805329 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805380 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805413 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805452 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805455 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805477 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805527 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805557 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805577 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805600 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805623 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805649 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805668 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805687 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805707 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805727 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805750 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805777 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805821 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805848 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805872 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805899 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805925 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805932 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.805949 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806031 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806069 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806075 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806326 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806551 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806599 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806658 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806695 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806732 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806763 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806826 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806866 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806899 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806935 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806973 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807015 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807057 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807096 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807131 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807169 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807210 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807251 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807425 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807472 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807510 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807548 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807588 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807619 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807655 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807716 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807761 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807838 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807879 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807925 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807966 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808005 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808045 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808085 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808129 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808170 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808207 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808247 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808306 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808343 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808379 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808416 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808459 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808495 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808534 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808573 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808611 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808666 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808707 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808749 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808788 5058 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808852 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808890 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808926 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809001 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809043 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809082 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809118 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809158 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809193 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809230 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809270 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809309 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809346 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809387 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809468 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809509 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809552 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809590 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809630 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809667 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809707 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809741 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809780 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809846 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809885 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809920 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809955 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809994 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810034 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810074 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810116 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810155 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810187 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810222 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810257 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810291 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810322 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810360 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810404 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810442 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810477 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810519 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810640 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810677 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810710 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810741 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810778 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810840 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810876 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811492 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811557 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811593 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811629 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811663 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811703 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811745 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811789 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811861 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811894 5058 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811927 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811962 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811997 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812031 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812070 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812108 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812142 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812182 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812222 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 
06:47:52.812259 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812295 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812331 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812408 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812444 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812481 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812516 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812555 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812588 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 
06:47:52.812622 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812686 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812730 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812768 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812906 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812944 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812991 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813034 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 
crc kubenswrapper[5058]: I1014 06:47:52.813118 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813186 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813228 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813274 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813315 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813361 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813405 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813445 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813485 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813548 5058 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813573 5058 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813594 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813614 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813640 5058 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813661 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813683 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813704 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813726 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813747 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813773 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813817 5058 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813838 5058 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813859 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813877 5058 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813894 5058 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813912 5058 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813927 5058 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813945 5058 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.820139 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823454 5058 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806822 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806920 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.806983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807066 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807288 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807314 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807512 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807596 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807730 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807770 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807844 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.807898 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808069 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808136 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808204 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808200 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808637 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.808924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809458 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809489 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809521 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809509 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.809973 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810043 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810117 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810332 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810356 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810364 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810746 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810769 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810791 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810932 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.810976 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811267 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811362 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811459 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811711 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.811978 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812224 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812173 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812312 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812324 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812436 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812736 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812750 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.812774 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813165 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813033 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813224 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813488 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813713 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813710 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.813930 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.814164 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.814240 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.814268 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.814284 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.815419 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.815538 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.815566 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.815882 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816058 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816138 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816214 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816219 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816498 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816556 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816614 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816846 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.816984 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.817693 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.817779 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.818149 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.818201 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.818552 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.818991 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.819485 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.820025 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.820606 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.821107 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.821232 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.821607 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.821650 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.822065 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.822260 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.822363 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.822693 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.822938 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823179 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823252 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823319 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823566 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823616 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.823864 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.824148 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.824285 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.824768 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.825354 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.825562 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.825736 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.825731 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.826026 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.826069 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.826482 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.826565 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.826587 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.827047 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.827071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.827605 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.827959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.828032 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.828629 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.828942 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.828774 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.829060 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.829506 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.834432 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.834666 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:53.334636219 +0000 UTC m=+21.245720045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.835063 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:53.33504378 +0000 UTC m=+21.246127606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.835050 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.835161 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.836311 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.840977 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.841284 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.841664 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.842341 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.842592 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.843924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.844272 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.850168 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.850184 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.850343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.850444 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.850911 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.851206 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851464 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851492 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851512 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851575 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851593 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.851607 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.852271 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.853180 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.853393 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.853565 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.856020 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.856229 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.856250 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.857009 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.862931 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.863189 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.863828 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.863982 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.864143 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.864338 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.864451 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.864617 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:53.364593431 +0000 UTC m=+21.275677237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.864997 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.865442 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.870323 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.871618 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.874010 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.874110 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:53.374086582 +0000 UTC m=+21.285170388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:52 crc kubenswrapper[5058]: E1014 06:47:52.874916 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:47:53.374909435 +0000 UTC m=+21.285993241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.876919 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.877227 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.887003 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.894206 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.898450 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.898888 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.899155 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.899427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.899877 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.900300 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.900396 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.900362 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.900506 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.901922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.902049 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.904922 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.906075 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.907170 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.907317 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.909775 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.910046 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.910233 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.910502 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.910664 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.911025 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.911153 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.911513 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.911840 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915474 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915521 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915608 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915625 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915637 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915651 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915662 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915676 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915690 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915700 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915711 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915722 5058 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915736 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915746 5058 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915756 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915766 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915780 5058 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915811 5058 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915824 5058 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915833 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915847 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915858 5058 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915871 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915887 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915899 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915911 5058 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915926 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915944 5058 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915959 5058 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915975 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.915989 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916003 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916013 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916023 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916037 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916047 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916056 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916066 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916078 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916088 5058 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916097 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916107 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916119 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916130 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916139 5058 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916149 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916162 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916172 5058 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916183 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916199 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916210 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916220 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916232 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916246 5058 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916255 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916265 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916274 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916288 5058 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916297 5058 reconciler_common.go:293] 
"Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916307 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916319 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916335 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916347 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916550 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916562 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916574 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916573 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916305 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916584 5058 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916648 5058 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916667 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916680 5058 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916696 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916709 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916720 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916732 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916745 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916756 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916765 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916775 5058 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916787 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916871 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916880 5058 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916890 5058 reconciler_common.go:293] 
"Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916903 5058 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916913 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916922 5058 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916935 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916946 5058 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916955 5058 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.916987 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917004 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917016 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917025 5058 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917036 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917047 5058 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917056 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917065 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917075 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917086 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917098 5058 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917110 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917127 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917139 5058 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917149 5058 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917158 5058 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917171 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917180 5058 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917191 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917200 5058 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917211 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917219 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917228 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917240 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917249 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917258 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917267 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917279 5058 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917288 5058 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917298 5058 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917307 5058 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917320 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917329 5058 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917341 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917350 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917361 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917370 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917381 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917392 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917401 5058 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917410 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917419 5058 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917430 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917439 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917448 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917457 5058 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917468 5058 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917478 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917486 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917495 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917507 5058 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917516 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917525 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917536 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917546 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917555 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917574 5058 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917585 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917594 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917604 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917612 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917624 5058 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917635 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917643 5058 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917655 5058 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917666 5058 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917676 5058 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917685 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917696 5058 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917705 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917714 5058 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917724 5058 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917735 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917744 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917753 5058 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917762 5058 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917772 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917781 5058 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917789 5058 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917816 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917825 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917834 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917843 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917856 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.917866 5058 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.924074 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.933016 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.934505 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.936095 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.937761 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d" exitCode=255 Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.937821 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d"} Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.946050 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.950265 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.951262 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.954825 5058 scope.go:117] "RemoveContainer" containerID="56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.957493 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.957625 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.962531 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.975028 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.986656 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14
T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:52 crc kubenswrapper[5058]: I1014 06:47:52.999366 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.008142 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.018428 5058 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.018465 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.018483 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.018493 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.021317 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.032930 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.042329 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.045125 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.052046 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.052561 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a49c856a20e29910ba10afddf83fd1a1ef4d87357db67bdc52d3d8150238c587 WatchSource:0}: Error finding container a49c856a20e29910ba10afddf83fd1a1ef4d87357db67bdc52d3d8150238c587: Status 404 returned error can't find the container with id a49c856a20e29910ba10afddf83fd1a1ef4d87357db67bdc52d3d8150238c587 Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.055042 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.058833 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-65a5d6bbf88fcdc0d7225cddb0bdb0ac7e049a93a057304264f1e8174db05b71 WatchSource:0}: Error finding container 65a5d6bbf88fcdc0d7225cddb0bdb0ac7e049a93a057304264f1e8174db05b71: Status 404 returned error can't find the container with id 65a5d6bbf88fcdc0d7225cddb0bdb0ac7e049a93a057304264f1e8174db05b71 Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.077454 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.094585 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.098304 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-54cn9"] Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.098625 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.101474 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.101702 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.101753 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.113043 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.122883 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.150779 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35
df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a0
98bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.176532 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.190713 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.218943 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.219444 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/674976ad-c787-440f-a8ab-98ebb4fd6d3f-hosts-file\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.219480 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq6q\" (UniqueName: \"kubernetes.io/projected/674976ad-c787-440f-a8ab-98ebb4fd6d3f-kube-api-access-hvq6q\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.243682 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14
T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.274895 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.299920 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.319933 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvq6q\" (UniqueName: \"kubernetes.io/projected/674976ad-c787-440f-a8ab-98ebb4fd6d3f-kube-api-access-hvq6q\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.320010 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/674976ad-c787-440f-a8ab-98ebb4fd6d3f-hosts-file\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.320091 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/674976ad-c787-440f-a8ab-98ebb4fd6d3f-hosts-file\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.350412 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvq6q\" (UniqueName: \"kubernetes.io/projected/674976ad-c787-440f-a8ab-98ebb4fd6d3f-kube-api-access-hvq6q\") pod \"node-resolver-54cn9\" (UID: \"674976ad-c787-440f-a8ab-98ebb4fd6d3f\") " pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.420372 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.420467 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.420498 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420566 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:47:54.420535738 +0000 UTC m=+22.331619544 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420628 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420649 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420651 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.420647 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420667 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420754 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:54.420733153 +0000 UTC m=+22.331816959 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420763 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.420907 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420918 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420968 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.420988 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.421008 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:54.420990211 +0000 UTC m=+22.332074017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.421027 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:54.421020072 +0000 UTC m=+22.332103878 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.421046 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:54.421038202 +0000 UTC m=+22.332122008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.426861 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-54cn9" Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.438244 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod674976ad_c787_440f_a8ab_98ebb4fd6d3f.slice/crio-32471a9dc85a01da03766d6300c10de4dde5dcc289a90789e261bf8cfcb053d1 WatchSource:0}: Error finding container 32471a9dc85a01da03766d6300c10de4dde5dcc289a90789e261bf8cfcb053d1: Status 404 returned error can't find the container with id 32471a9dc85a01da03766d6300c10de4dde5dcc289a90789e261bf8cfcb053d1 Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.474268 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q5fhs"] Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.474812 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.477317 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.477453 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.477548 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.481575 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.481818 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-csl4q"] Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.482239 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.483261 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.483608 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hhxzz"] Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.484281 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.487227 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.487261 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.489240 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.489382 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.489612 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.490723 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.490941 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.524075 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.536471 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.547088 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.559808 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.568257 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.576516 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.588475 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.604046 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.616711 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622778 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622855 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622877 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-multus\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622896 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-kubelet\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622916 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-etc-kubernetes\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.622967 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-k8s-cni-cncf-io\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623050 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-netns\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623183 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64184db4-5b6d-4aa8-b780-c9f6163af3d8-mcd-auth-proxy-config\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623261 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-conf-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623308 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-cnibin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623327 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-os-release\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623343 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-cni-binary-copy\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623363 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-bin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623387 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkrf\" (UniqueName: \"kubernetes.io/projected/ae5798c9-200b-4801-8cf2-750b1394ff5f-kube-api-access-cjkrf\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623416 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/64184db4-5b6d-4aa8-b780-c9f6163af3d8-rootfs\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 
06:47:53.623470 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623497 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-hostroot\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623518 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623539 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-multus-certs\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623559 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9x48\" (UniqueName: \"kubernetes.io/projected/64184db4-5b6d-4aa8-b780-c9f6163af3d8-kube-api-access-v9x48\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623577 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-system-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623623 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-socket-dir-parent\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623655 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-os-release\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623770 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xpzq\" (UniqueName: \"kubernetes.io/projected/1288bab5-7372-4acc-963c-6232b27a7975-kube-api-access-9xpzq\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") 
" pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623888 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623919 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-cnibin\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623940 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64184db4-5b6d-4aa8-b780-c9f6163af3d8-proxy-tls\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.623999 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-multus-daemon-config\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.628839 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.643527 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14
T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.661125 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.672913 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.683459 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.694509 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.704129 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.714200 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724695 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724740 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724775 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-k8s-cni-cncf-io\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724826 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-netns\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724860 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-multus\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724884 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-kubelet\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724906 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-etc-kubernetes\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc 
kubenswrapper[5058]: I1014 06:47:53.724898 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.724929 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64184db4-5b6d-4aa8-b780-c9f6163af3d8-mcd-auth-proxy-config\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-conf-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725050 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-cnibin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725075 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-os-release\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725100 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-cni-binary-copy\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725102 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-k8s-cni-cncf-io\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725144 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-bin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725121 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-bin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725171 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-conf-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725195 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjkrf\" (UniqueName: \"kubernetes.io/projected/ae5798c9-200b-4801-8cf2-750b1394ff5f-kube-api-access-cjkrf\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725226 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725247 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/64184db4-5b6d-4aa8-b780-c9f6163af3d8-rootfs\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725271 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-hostroot\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725293 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725326 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-multus-certs\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725348 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9x48\" (UniqueName: \"kubernetes.io/projected/64184db4-5b6d-4aa8-b780-c9f6163af3d8-kube-api-access-v9x48\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725372 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-system-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725392 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-socket-dir-parent\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725375 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-kubelet\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-os-release\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725431 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/64184db4-5b6d-4aa8-b780-c9f6163af3d8-rootfs\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725457 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-netns\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725063 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-var-lib-cni-multus\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725270 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-os-release\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725431 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-etc-kubernetes\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725227 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-cnibin\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725519 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-host-run-multus-certs\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 
06:47:53.725578 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xpzq\" (UniqueName: \"kubernetes.io/projected/1288bab5-7372-4acc-963c-6232b27a7975-kube-api-access-9xpzq\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725593 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-os-release\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725620 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-socket-dir-parent\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725665 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-system-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725695 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725711 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-multus-cni-dir\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725749 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1288bab5-7372-4acc-963c-6232b27a7975-hostroot\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725810 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-cnibin\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725839 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-cnibin\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.725965 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-multus-daemon-config\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726011 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-cni-binary-copy\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726028 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64184db4-5b6d-4aa8-b780-c9f6163af3d8-proxy-tls\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726370 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64184db4-5b6d-4aa8-b780-c9f6163af3d8-mcd-auth-proxy-config\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726473 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae5798c9-200b-4801-8cf2-750b1394ff5f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726578 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726726 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1288bab5-7372-4acc-963c-6232b27a7975-multus-daemon-config\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.726844 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae5798c9-200b-4801-8cf2-750b1394ff5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.730619 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64184db4-5b6d-4aa8-b780-c9f6163af3d8-proxy-tls\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.737597 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.743984 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjkrf\" (UniqueName: \"kubernetes.io/projected/ae5798c9-200b-4801-8cf2-750b1394ff5f-kube-api-access-cjkrf\") pod \"multus-additional-cni-plugins-hhxzz\" (UID: \"ae5798c9-200b-4801-8cf2-750b1394ff5f\") " pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.745931 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xpzq\" (UniqueName: 
\"kubernetes.io/projected/1288bab5-7372-4acc-963c-6232b27a7975-kube-api-access-9xpzq\") pod \"multus-csl4q\" (UID: \"1288bab5-7372-4acc-963c-6232b27a7975\") " pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.748488 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9x48\" (UniqueName: \"kubernetes.io/projected/64184db4-5b6d-4aa8-b780-c9f6163af3d8-kube-api-access-v9x48\") pod \"machine-config-daemon-q5fhs\" (UID: \"64184db4-5b6d-4aa8-b780-c9f6163af3d8\") " pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.750547 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.762621 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.772567 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.789995 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:53 crc kubenswrapper[5058]: E1014 06:47:53.790217 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.794066 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.800808 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-csl4q" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.810816 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.812510 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64184db4_5b6d_4aa8_b780_c9f6163af3d8.slice/crio-c5bd0f10840ca01861d1344f3f896aa1c3b00e6e46328a924de309134f1f6e46 WatchSource:0}: Error finding container c5bd0f10840ca01861d1344f3f896aa1c3b00e6e46328a924de309134f1f6e46: Status 404 returned error can't find the container with id c5bd0f10840ca01861d1344f3f896aa1c3b00e6e46328a924de309134f1f6e46 Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.835974 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1288bab5_7372_4acc_963c_6232b27a7975.slice/crio-fc52c9f25ae0b2c03b0ba5ec2a8f41096a21d5f033a76987b7414cebd594b670 WatchSource:0}: Error finding container fc52c9f25ae0b2c03b0ba5ec2a8f41096a21d5f033a76987b7414cebd594b670: Status 404 returned error can't find the container with id fc52c9f25ae0b2c03b0ba5ec2a8f41096a21d5f033a76987b7414cebd594b670 Oct 14 06:47:53 crc kubenswrapper[5058]: W1014 06:47:53.836540 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5798c9_200b_4801_8cf2_750b1394ff5f.slice/crio-987ef474bbbb51c15e9592f804b42f659e32660e2a09c873611cc42dd1add194 WatchSource:0}: Error finding container 987ef474bbbb51c15e9592f804b42f659e32660e2a09c873611cc42dd1add194: Status 404 returned error can't find the container with id 987ef474bbbb51c15e9592f804b42f659e32660e2a09c873611cc42dd1add194 Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.848011 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fw5vr"] Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.849138 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.851379 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.851468 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.853781 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.853915 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.854187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.854389 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.854472 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.881169 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.896672 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.912995 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.932228 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.944918 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.945204 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"37c26b62943f7889517735b2c6fd756efd78bdf46f3b29a13600db548989bd7e"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.956498 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.960962 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.962326 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.962771 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.965216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerStarted","Data":"987ef474bbbb51c15e9592f804b42f659e32660e2a09c873611cc42dd1add194"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.966195 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerStarted","Data":"fc52c9f25ae0b2c03b0ba5ec2a8f41096a21d5f033a76987b7414cebd594b670"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.967664 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"c5bd0f10840ca01861d1344f3f896aa1c3b00e6e46328a924de309134f1f6e46"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.969209 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-54cn9" 
event={"ID":"674976ad-c787-440f-a8ab-98ebb4fd6d3f","Type":"ContainerStarted","Data":"65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.969242 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-54cn9" event={"ID":"674976ad-c787-440f-a8ab-98ebb4fd6d3f","Type":"ContainerStarted","Data":"32471a9dc85a01da03766d6300c10de4dde5dcc289a90789e261bf8cfcb053d1"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.972364 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.972401 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.972414 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"65a5d6bbf88fcdc0d7225cddb0bdb0ac7e049a93a057304264f1e8174db05b71"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.973433 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.975463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.975536 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a49c856a20e29910ba10afddf83fd1a1ef4d87357db67bdc52d3d8150238c587"} Oct 14 06:47:53 crc kubenswrapper[5058]: I1014 06:47:53.988916 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.002954 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is 
after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.015623 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028113 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028205 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028232 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028268 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028823 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028875 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028919 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028948 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.028965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029054 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029106 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkw2\" (UniqueName: \"kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029137 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029161 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029183 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029210 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029251 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029276 5058 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029318 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029347 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.029412 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.030029 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.065288 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.116936 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130749 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130805 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130853 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130876 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130924 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130947 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130949 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131035 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.130974 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131080 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131100 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131179 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131214 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131242 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131317 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131336 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkw2\" 
(UniqueName: \"kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131350 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131370 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131365 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131442 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131408 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131385 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131827 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131863 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131886 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.131877 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132084 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132147 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132204 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132264 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132290 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132320 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132341 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132480 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 
06:47:54.132514 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.132931 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.137074 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.143391 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.167960 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkw2\" (UniqueName: \"kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2\") pod \"ovnkube-node-fw5vr\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") " pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.212144 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.251098 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T
06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.252579 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:47:54 crc kubenswrapper[5058]: W1014 06:47:54.268924 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58308f56_cccd_4c52_89af_c23806a4769e.slice/crio-9d9ed2a8de264e30c7f219ba33ede9b9cd160967a556753f2f55f824f7e6c777 WatchSource:0}: Error finding container 9d9ed2a8de264e30c7f219ba33ede9b9cd160967a556753f2f55f824f7e6c777: Status 404 returned error can't find the container with id 9d9ed2a8de264e30c7f219ba33ede9b9cd160967a556753f2f55f824f7e6c777 Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.294865 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.323746 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.377955 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.408834 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.435263 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435501 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:47:56.435472882 +0000 UTC m=+24.346556728 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.435582 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.435635 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.435674 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.435719 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435726 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435842 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435860 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435895 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435908 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:54 crc 
kubenswrapper[5058]: E1014 06:47:54.435928 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435875 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:56.435858573 +0000 UTC m=+24.346942419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435872 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435996 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.435978 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:56.435957116 +0000 UTC m=+24.347040912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.436025 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:56.436014368 +0000 UTC m=+24.347098174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.436042 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:47:56.436033868 +0000 UTC m=+24.347117664 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.472703 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.504534 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.544504 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.570135 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.615391 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.645163 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.789678 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.789897 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.789989 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:47:54 crc kubenswrapper[5058]: E1014 06:47:54.790163 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.794460 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.795039 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.796515 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.797325 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.798563 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.799180 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.799969 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.801139 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.802006 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.803085 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.803630 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.804761 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.805317 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.805895 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.807784 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.808368 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.809783 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.810320 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.810965 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.811536 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.812117 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.812698 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.813179 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.814822 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.815323 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.815939 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.817001 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.817473 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.818545 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.819037 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.820137 5058 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.820249 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.822110 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.823063 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.823472 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.825038 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.825686 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.826626 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.827324 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.828353 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.828892 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.830702 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.831634 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.832395 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.832917 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.833488 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.834174 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.835163 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.837479 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.838043 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.838576 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.839557 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.840160 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.841109 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.980420 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" exitCode=0
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.980499 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"}
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.980565 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"9d9ed2a8de264e30c7f219ba33ede9b9cd160967a556753f2f55f824f7e6c777"}
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.982310 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35" exitCode=0
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.982429 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35"}
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.984047 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerStarted","Data":"4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23"}
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.985980 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287"}
Oct 14 06:47:54 crc kubenswrapper[5058]: I1014 06:47:54.986071 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc"}
Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.004761 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:54Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.032147 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.060652 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.080889 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.098583 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.113101 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.135887 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z 
is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.164619 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.180931 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.201149 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.211737 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.226455 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.238011 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.253551 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.267351 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.282436 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.320313 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.365027 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.402476 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc 
kubenswrapper[5058]: I1014 06:47:55.454720 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.501936 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.549203 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.567607 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.627377 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.655626 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.683059 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:55Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.789723 5058 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:55 crc kubenswrapper[5058]: E1014 06:47:55.789853 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.994174 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.994248 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.994275 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.994295 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.994315 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.996871 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029"} Oct 14 06:47:55 crc kubenswrapper[5058]: I1014 06:47:55.999372 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerStarted","Data":"148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851"} Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.011917 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.034158 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.056339 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.073563 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.087891 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.106912 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.120059 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.133424 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.149030 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.163507 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.177963 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.197070 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.221433 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc 
kubenswrapper[5058]: I1014 06:47:56.253433 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.284470 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.327264 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.382208 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.407342 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.448894 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.458452 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.458657 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.458712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.458767 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:48:00.458715902 +0000 UTC m=+28.369799748 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.458885 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.458936 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.458974 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459016 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459040 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459072 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:00.459039772 +0000 UTC m=+28.370123618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459116 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:00.459091593 +0000 UTC m=+28.370175439 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.458976 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459135 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459205 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:00.459188696 +0000 UTC m=+28.370272542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459199 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459261 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459285 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.459380 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:00.459348331 +0000 UTC m=+28.370432167 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.489942 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.525862 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.567695 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.604216 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.643624 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.687935 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.728307 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:56Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.789716 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:56 crc kubenswrapper[5058]: I1014 06:47:56.789855 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.791063 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:47:56 crc kubenswrapper[5058]: E1014 06:47:56.791192 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.007760 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851" exitCode=0 Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.008472 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851"} Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.016126 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"} Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.028995 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.046426 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.073749 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.093971 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.114532 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.130281 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.148238 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.209213 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.245861 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.274763 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.292268 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.305388 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.317774 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:57Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:57 crc kubenswrapper[5058]: I1014 06:47:57.789393 5058 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:57 crc kubenswrapper[5058]: E1014 06:47:57.789544 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.022524 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba" exitCode=0 Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.022596 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.045360 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.064989 5058 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.086279 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.106031 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.126553 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.148261 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.172430 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z 
is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.189908 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.214361 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.232559 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.250781 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.268955 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.286965 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.529958 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.536871 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.544370 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.556877 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.572105 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.594133 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.596147 5058 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.599225 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.599273 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.599285 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.599372 5058 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.609437 5058 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.609868 5058 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.611272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.611351 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.611373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.611403 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.611425 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.630351 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.638636 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0
bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.644456 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.644548 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.644568 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.644596 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.644616 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.654137 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.664598 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0
bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.670732 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.670789 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.670846 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.670881 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.670903 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.674576 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.691110 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0
bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.697391 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.697472 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.697497 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.697524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.697546 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.699121 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.717984 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.723689 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.723737 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.723749 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.723775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.723789 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.733055 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87e
a3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.743868 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 
2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.744232 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.746950 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.747014 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.747042 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.747075 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.747101 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.809473 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.809686 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.810065 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:47:58 crc kubenswrapper[5058]: E1014 06:47:58.810308 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.811498 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b
5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.831192 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.849101 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.854167 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.854326 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.854356 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.854404 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.854429 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.868431 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.888275 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.910134 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.932195 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.948562 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.958468 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.958557 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.958579 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.958611 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.958632 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:58Z","lastTransitionTime":"2025-10-14T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.968992 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:58 crc kubenswrapper[5058]: I1014 06:47:58.986186 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.005151 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.024179 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.030571 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631" exitCode=0 Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.030639 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.037356 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" 
event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.042562 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.062535 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.062607 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.062624 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.062653 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.062671 
5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.073268 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.104925 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.126269 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.158084 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.165944 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.166007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.166027 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.166054 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.166073 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.176363 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.192309 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-j7fmm"] Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.193070 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.194401 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.196506 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.197006 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.197163 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.197535 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.213553 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4339a8-d57c-4951-87f3-5d00a0b20c84-host\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.213634 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbzw\" (UniqueName: \"kubernetes.io/projected/be4339a8-d57c-4951-87f3-5d00a0b20c84-kube-api-access-zdbzw\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.213676 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be4339a8-d57c-4951-87f3-5d00a0b20c84-serviceca\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 
14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.220987 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.236137 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.251733 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.269100 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.269172 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.269198 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.269235 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.269273 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.272778 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.287780 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
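[annotation] Each err="failed to patch status ..." entry embeds the JSON patch body the kubelet tried to send, shown by the journal with backslash-escaped quotes. Once unescaped it is an ordinary strategic-merge-patch document; the $setElementOrder/conditions key is patch directive metadata recording the intended ordering of the conditions list, not a status field. A small Go sketch that pretty-prints such a payload (abbreviated here from the first entry above; the full payloads carry the same shape):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Abbreviated from the first patch above; the journal shows this with
	// escaped quotes, this is the unescaped form.
	payload := []byte(`{"metadata":{"uid":"ef543e1b-8068-4ea3-b32a-61027b32e95d"},` +
		`"status":{"$setElementOrder/conditions":[{"type":"Ready"}],` +
		`"conditions":[{"lastTransitionTime":"2025-10-14T06:47:53Z",` +
		`"status":"True","type":"Ready"}]}}`)

	var out bytes.Buffer
	if err := json.Indent(&out, payload, "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println(out.String())
}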
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.308723 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.315505 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbzw\" (UniqueName: \"kubernetes.io/projected/be4339a8-d57c-4951-87f3-5d00a0b20c84-kube-api-access-zdbzw\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.315579 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be4339a8-d57c-4951-87f3-5d00a0b20c84-serviceca\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.315675 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4339a8-d57c-4951-87f3-5d00a0b20c84-host\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.315784 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4339a8-d57c-4951-87f3-5d00a0b20c84-host\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.316924 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be4339a8-d57c-4951-87f3-5d00a0b20c84-serviceca\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.327441 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.340238 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbzw\" (UniqueName: \"kubernetes.io/projected/be4339a8-d57c-4951-87f3-5d00a0b20c84-kube-api-access-zdbzw\") pod \"node-ca-j7fmm\" (UID: \"be4339a8-d57c-4951-87f3-5d00a0b20c84\") " pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.343324 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.363790 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.372614 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.372645 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.372656 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.372675 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.372688 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.401959 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.446998 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.475890 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.475940 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.475951 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.475973 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.475991 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.484118 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.517444 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j7fmm" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.533328 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: W1014 06:47:59.538053 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4339a8_d57c_4951_87f3_5d00a0b20c84.slice/crio-75f5a0b025ceb8320fae1b988357db1c004bb5274d3e02b2eb870af59bc7db04 WatchSource:0}: Error finding container 75f5a0b025ceb8320fae1b988357db1c004bb5274d3e02b2eb870af59bc7db04: Status 404 returned error can't find the container with id 75f5a0b025ceb8320fae1b988357db1c004bb5274d3e02b2eb870af59bc7db04 Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.571994 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.578922 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.579012 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.579033 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.579484 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.579690 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.626540 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87e
a3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:47:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.683266 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.683312 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.683324 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.683345 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.683361 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.786339 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.786410 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.786429 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.786461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.786483 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.789654 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:47:59 crc kubenswrapper[5058]: E1014 06:47:59.789922 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.889638 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.890206 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.890216 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.890239 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.890251 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.993650 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.993711 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.993762 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.993787 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:47:59 crc kubenswrapper[5058]: I1014 06:47:59.993845 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:47:59Z","lastTransitionTime":"2025-10-14T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.049595 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j7fmm" event={"ID":"be4339a8-d57c-4951-87f3-5d00a0b20c84","Type":"ContainerStarted","Data":"e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.049672 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j7fmm" event={"ID":"be4339a8-d57c-4951-87f3-5d00a0b20c84","Type":"ContainerStarted","Data":"75f5a0b025ceb8320fae1b988357db1c004bb5274d3e02b2eb870af59bc7db04"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.054539 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611" exitCode=0 Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.054623 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.068524 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.084685 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.096695 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.096734 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.096745 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.096765 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.096779 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.108531 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.121634 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.140019 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.158299 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.181278 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.199706 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.199740 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.199750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.199997 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.200033 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.201237 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.219255 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.243067 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z 
is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.276410 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.293896 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.302148 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.302207 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.302228 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.302255 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.302275 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.311932 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.327476 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.344107 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.365457 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.381766 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.394854 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.405153 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.405201 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.405214 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.405236 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.405251 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.418171 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.438627 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.460609 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.486665 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.509043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.509110 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.509128 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.509153 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.509172 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.530100 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530355 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.530305376 +0000 UTC m=+36.441389192 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.530427 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.530475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.530530 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.530626 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530680 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530715 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530736 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530740 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530756 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:00 crc 
kubenswrapper[5058]: E1014 06:48:00.530828 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530855 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.530828521 +0000 UTC m=+36.441912357 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530935 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.530883442 +0000 UTC m=+36.441967278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530929 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.530976 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.531073 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.531058737 +0000 UTC m=+36.442142573 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.531153 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.531090938 +0000 UTC m=+36.442174774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.533847 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.571629 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.606117 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.612579 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.612655 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.612679 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.612716 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.612745 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.654104 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.705147 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z 
is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.718057 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.718121 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.718138 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.718160 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.718175 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.762495 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.791090 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.791171 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.791267 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
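Nearly every status patch in this window dies the same way: the node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose validity ended 2025-08-24T17:21:41Z, some seven weeks before the node's clock reads 2025-10-14T06:48:00Z. The message is Go's standard x509 validity-window error, produced by a plain comparison of the current time against the certificate's NotBefore/NotAfter bounds. A self-contained Go sketch of that check follows; the certificate path is a placeholder, since the webhook's actual cert location is not shown in this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; substitute the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Same condition that yields "certificate has expired or is not yet valid".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid before %s\n", cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

Note that only API writes passing through the webhook fail this way, which is why the same expiry timestamp repeats across otherwise healthy pods. The pod-sync error record resumes below.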
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:00 crc kubenswrapper[5058]: E1014 06:48:00.791334 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.820562 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.820603 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.820612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.820629 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.820639 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.826989 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.843465 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.923921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.923968 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.923978 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.923999 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:00 crc kubenswrapper[5058]: I1014 06:48:00.924011 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:00Z","lastTransitionTime":"2025-10-14T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.027267 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.027837 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.027868 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.027903 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.027932 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.069755 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.070344 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.070416 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.078607 5058 generic.go:334] "Generic (PLEG): container finished" podID="ae5798c9-200b-4801-8cf2-750b1394ff5f" containerID="0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c" exitCode=0 Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.078682 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerDied","Data":"0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.095974 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.152668 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.152733 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.152752 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.152780 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.152823 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.155269 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.155386 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.155852 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaa
e8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.180358 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.213998 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.229590 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.245470 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.256933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.256983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.256995 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.257020 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.257032 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.261665 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.274153 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.295455 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.313059 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.326882 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.341364 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.356317 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.360542 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.360589 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.360600 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.360617 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.360628 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.374773 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.402474 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.446177 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.463097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.463168 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.463187 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.463220 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.463243 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.485846 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.523884 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.566940 5058 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.567003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.567018 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.567041 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.567058 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.570870 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.606398 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.650744 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.670880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.670963 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.670985 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.671024 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.671049 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.684835 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.731124 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.772396 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c
2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.774669 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.774741 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.774760 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.774788 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.774837 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.789941 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:01 crc kubenswrapper[5058]: E1014 06:48:01.790133 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.804638 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.848162 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.878078 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.878164 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.878190 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.878234 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.878265 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.890576 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.941404 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636be
a6da4398aa4fcf58dbc6c850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.982464 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.982530 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.982549 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.982583 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.982605 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:01Z","lastTransitionTime":"2025-10-14T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:01 crc kubenswrapper[5058]: I1014 06:48:01.984702 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:01Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.008537 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.092208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.092268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.092290 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.092333 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.092356 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.093771 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" event={"ID":"ae5798c9-200b-4801-8cf2-750b1394ff5f","Type":"ContainerStarted","Data":"146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.093928 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.116349 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.136641 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.156104 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.173775 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.198993 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.199064 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.199083 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.199114 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.199136 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.211199 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.282439 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.303229 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.303335 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc 
kubenswrapper[5058]: I1014 06:48:02.303360 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.303399 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.303423 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.303517 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 
14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.331348 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.368584 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.408455 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.408547 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.408574 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.408616 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.408644 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.409523 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.451271 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.488547 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.512219 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.512268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.512285 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.512312 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.512331 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.537921 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.580115 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.607852 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.615221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.615268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.615286 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.615313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.615331 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.722017 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.722076 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.722092 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.722284 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.722309 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.789298 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.789390 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:02 crc kubenswrapper[5058]: E1014 06:48:02.789536 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:02 crc kubenswrapper[5058]: E1014 06:48:02.789675 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.811401 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.826636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.826677 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.826691 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.826709 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.826727 5058 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.834705 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.863856 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd16416
07013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19
e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.882723 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.908672 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.923052 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.929334 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.929367 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.929379 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.929399 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.929413 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:02Z","lastTransitionTime":"2025-10-14T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.940740 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.959196 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:02 crc kubenswrapper[5058]: I1014 06:48:02.971183 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:02Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.014788 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636be
a6da4398aa4fcf58dbc6c850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.031815 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.031875 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.031895 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.031919 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.031934 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.053672 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.085721 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.096728 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.124372 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.135350 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.135396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.135410 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.135435 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.135453 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.162487 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.204662 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:03Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.238420 5058 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.238465 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.238477 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.238497 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.238515 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.351636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.351703 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.351719 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.351743 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.351758 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.454358 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.454407 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.454420 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.454440 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.454455 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.557496 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.557554 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.557565 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.557585 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.557598 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.661128 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.661174 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.661185 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.661206 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.661218 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.765429 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.765500 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.765518 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.765547 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.765565 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.789292 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:03 crc kubenswrapper[5058]: E1014 06:48:03.789489 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.868553 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.868997 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.869120 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.869295 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.869416 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.973314 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.973726 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.973921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.974066 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:03 crc kubenswrapper[5058]: I1014 06:48:03.974205 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:03Z","lastTransitionTime":"2025-10-14T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.077719 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.077774 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.077791 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.077847 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.077868 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.103508 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/0.log"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.107432 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850" exitCode=1
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.107495 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850"}
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.108848 5058 scope.go:117] "RemoveContainer" containerID="82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.129172 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.150443 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.174367 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.181914 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.181977 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.181999 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.182026 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.182044 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.191411 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.215886 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.241963 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.261755 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.278487 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.285456 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.285521 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.285537 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.285558 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.285594 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.293916 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.310586 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.331996 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.348719 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.369409 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389608 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389645 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389678 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389691 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.389991 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.403467 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:04Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.493524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.493625 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.493644 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.494189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.494439 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.598272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.598317 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.598332 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.598354 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.598374 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.702269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.702360 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.702395 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.702433 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.702458 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.789277 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:04 crc kubenswrapper[5058]: E1014 06:48:04.789529 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.790175 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:04 crc kubenswrapper[5058]: E1014 06:48:04.790372 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.805504 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.805780 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.805888 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.805996 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.806086 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.909835 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.909888 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.909902 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.909935 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:04 crc kubenswrapper[5058]: I1014 06:48:04.909952 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:04Z","lastTransitionTime":"2025-10-14T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.012870 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.013144 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.013240 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.013380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.013472 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.113668 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/0.log" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.119941 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.120007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.120022 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.120042 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.120056 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.121908 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.122056 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.139832 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.161467 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.176429 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.193434 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.207526 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.223161 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.223208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.223221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.223241 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.223254 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.227003 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.243136 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.254618 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.267964 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.286172 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.302154 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc 
kubenswrapper[5058]: I1014 06:48:05.320093 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.325147 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.325363 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.325469 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.325562 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.325643 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.347342 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.370320 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.389099 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:05Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.428386 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.428600 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.428658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.428720 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.428773 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.532162 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.532230 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.532248 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.532278 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.532300 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.636213 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.636277 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.636296 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.636322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.636339 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.740067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.740142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.740177 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.740210 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.740228 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.789337 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:05 crc kubenswrapper[5058]: E1014 06:48:05.789625 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.843636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.843712 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.843732 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.843759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.843783 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.947699 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.947777 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.947822 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.947850 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:05 crc kubenswrapper[5058]: I1014 06:48:05.947868 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:05Z","lastTransitionTime":"2025-10-14T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.050726 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.050842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.050865 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.050895 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.050915 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.130014 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/1.log" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.131145 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/0.log" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.135834 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb" exitCode=1 Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.135855 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.135983 5058 scope.go:117] "RemoveContainer" containerID="82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.137153 5058 scope.go:117] "RemoveContainer" containerID="fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb" Oct 14 06:48:06 crc kubenswrapper[5058]: E1014 06:48:06.137449 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.155439 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.155496 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.155515 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.155540 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.155560 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.162326 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.202232 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036c
c2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.222331 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.242777 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.258115 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.258339 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.258463 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.258587 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.258735 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.267294 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.286045 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.306291 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.323763 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.346981 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.362427 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.362508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.362528 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.362561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.362584 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.374738 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.393451 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.421624 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.455552 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002
b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.466533 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.466984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.467118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.467252 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.467402 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.484842 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.501156 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.571118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.571169 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.571181 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.571199 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.571214 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.573608 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds"] Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.574450 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.578537 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.579099 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.599622 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.612553 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.612636 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.612688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmxkr\" (UniqueName: \"kubernetes.io/projected/cb601f8e-5c64-47af-ac59-4251c7ab625a-kube-api-access-zmxkr\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.612747 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.643983 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-
node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.663170 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.675043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.675125 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.675144 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.675174 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.675193 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.686067 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.713898 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.713973 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 
06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.714017 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmxkr\" (UniqueName: \"kubernetes.io/projected/cb601f8e-5c64-47af-ac59-4251c7ab625a-kube-api-access-zmxkr\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.714086 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.715016 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.715074 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.719414 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.726635 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb601f8e-5c64-47af-ac59-4251c7ab625a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.742217 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmxkr\" (UniqueName: \"kubernetes.io/projected/cb601f8e-5c64-47af-ac59-4251c7ab625a-kube-api-access-zmxkr\") pod \"ovnkube-control-plane-749d76644c-9jpds\" (UID: \"cb601f8e-5c64-47af-ac59-4251c7ab625a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.742183 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.780884 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.780935 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.780948 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.780971 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.780986 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.784214 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.789085 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:06 crc kubenswrapper[5058]: E1014 06:48:06.789198 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.789592 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:06 crc kubenswrapper[5058]: E1014 06:48:06.789993 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.808705 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.833534 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.850483 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.867269 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.881908 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.884156 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.884448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.884641 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.884880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.885033 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.891865 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.894576 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: W1014 06:48:06.908396 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb601f8e_5c64_47af_ac59_4251c7ab625a.slice/crio-7c68cfb6df73660793985459de1423dc4300d3bf5c6f68bebed8e799977a6d00 WatchSource:0}: Error finding container 7c68cfb6df73660793985459de1423dc4300d3bf5c6f68bebed8e799977a6d00: Status 404 returned error can't find the container with id 7c68cfb6df73660793985459de1423dc4300d3bf5c6f68bebed8e799977a6d00 Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.913331 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.935395 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.949938 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:06Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.988459 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.988495 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.988505 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 06:48:06.988523 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:06 crc kubenswrapper[5058]: I1014 
06:48:06.988534 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:06Z","lastTransitionTime":"2025-10-14T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.093105 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.093185 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.093204 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.093240 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.093259 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.145635 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/1.log" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.153577 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" event={"ID":"cb601f8e-5c64-47af-ac59-4251c7ab625a","Type":"ContainerStarted","Data":"7c68cfb6df73660793985459de1423dc4300d3bf5c6f68bebed8e799977a6d00"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.195931 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.195979 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.195995 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.196021 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.196041 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.301704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.301777 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.301801 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.301853 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.301881 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.404215 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.404266 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.404278 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.404297 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.404310 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.508163 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.508246 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.508270 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.508305 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.508333 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.611572 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.611636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.611650 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.611675 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.611690 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.695748 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ckdsj"] Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.696937 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: E1014 06:48:07.697088 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.716293 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.716397 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.716450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.716487 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.716509 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.722891 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.722971 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm26h\" (UniqueName: \"kubernetes.io/projected/a70e631f-95b4-451e-821b-8b9297428934-kube-api-access-hm26h\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.724394 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.743891 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.765090 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.782592 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.790263 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:07 crc kubenswrapper[5058]: E1014 06:48:07.790496 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.802230 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.817088 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.819490 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.819539 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.819558 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.819584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.819602 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.823635 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.823701 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm26h\" (UniqueName: \"kubernetes.io/projected/a70e631f-95b4-451e-821b-8b9297428934-kube-api-access-hm26h\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: E1014 06:48:07.823851 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:07 crc kubenswrapper[5058]: E1014 06:48:07.823921 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:08.323900757 +0000 UTC m=+36.234984573 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.836611 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.857429 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T0
6:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2
de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.865475 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm26h\" (UniqueName: \"kubernetes.io/projected/a70e631f-95b4-451e-821b-8b9297428934-kube-api-access-hm26h\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.877074 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.898723 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.923030 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.923079 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.923093 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.923119 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.923135 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:07Z","lastTransitionTime":"2025-10-14T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.932845 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.953032 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.970635 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:07 crc kubenswrapper[5058]: I1014 06:48:07.991017 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:07Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.013504 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.027532 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.027581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.027593 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.027616 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.027631 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.037457 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.072063 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.130132 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.130200 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.130218 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.130248 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.130265 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.159476 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" event={"ID":"cb601f8e-5c64-47af-ac59-4251c7ab625a","Type":"ContainerStarted","Data":"eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.159569 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" event={"ID":"cb601f8e-5c64-47af-ac59-4251c7ab625a","Type":"ContainerStarted","Data":"186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.174610 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.186872 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.211096 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002
b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.225483 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.234041 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.234098 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.234114 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.234137 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.234155 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.238969 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.270320 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.284561 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.329645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.329821 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.329958 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:09.329940904 +0000 UTC m=+37.241024710 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.332803 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.337600 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.337637 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.337645 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.337660 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.337669 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.346283 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.357320 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.375375 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.389451 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.399663 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.411455 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.423169 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.436651 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.446249 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.446471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.446760 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.446943 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.447119 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.449988 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:08Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.530927 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.531041 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:08 crc kubenswrapper[5058]: 
I1014 06:48:08.531070 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.531091 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.531124 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.532446 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533212 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533238 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533317 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533466 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533497 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533550 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.533767 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.534021 5058 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:24.532513231 +0000 UTC m=+52.443597037 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.534087 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:48:24.534070295 +0000 UTC m=+52.445154101 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.534111 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:24.534102506 +0000 UTC m=+52.445186312 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.534757 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:24.534326502 +0000 UTC m=+52.445410338 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.535657 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:24.535017822 +0000 UTC m=+52.446101668 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.550205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.550413 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.550489 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.550564 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.550626 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.653920 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.653993 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.654017 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.654090 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.654111 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.756954 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.757035 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.757058 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.757092 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.757115 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.789331 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.789441 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.790215 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:08 crc kubenswrapper[5058]: E1014 06:48:08.790003 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.860561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.860671 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.860732 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.860845 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.860867 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.964888 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.964971 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.964984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.965010 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:08 crc kubenswrapper[5058]: I1014 06:48:08.965029 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:08Z","lastTransitionTime":"2025-10-14T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.066559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.066651 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.066670 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.066698 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.066720 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.085226 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.091700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.091770 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.091844 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.091883 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.091912 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.111167 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.118723 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.118790 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.118868 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.118902 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.118925 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.140852 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err=[status patch payload and webhook certificate error identical to the 06:48:09.111167 entry above; duplicate omitted] Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.146811 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.146923 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
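
Every status-patch failure in this stretch has the same root cause, stated at the end of each entry: the serving certificate of the node.network-node-identity.openshift.io webhook behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-14T06:48:09Z, so the TLS handshake fails before the patch ever reaches the API server. A minimal Go probe (a hypothetical diagnostic, not part of this log or of the kubelet) that reproduces the validity check against that endpoint:

```go
// Hypothetical stand-alone probe: connect to the webhook endpoint named in the
// log and print the serving certificate's validity window, i.e. the same
// NotBefore/NotAfter check that fails above with "certificate has expired".
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification so the handshake succeeds even though the
	// certificate is expired; we only want to inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().UTC().After(cert.NotAfter)) // true for the log above
}
```

On a CRC cluster this pattern typically shows up after the VM has been stopped for longer than the certificates' lifetime; the errors should clear once the cluster's certificates are rotated.
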
Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.146949 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.146984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.147008 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.167582 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err=[status patch payload and webhook certificate error identical to the 06:48:09.111167 entry above; duplicate omitted] Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.173034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.173112 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
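
Each "Error updating node status, will retry" entry above is one pass of the kubelet's bounded retry loop: the status patch is attempted a fixed number of times per sync interval (nodeStatusUpdateRetry, 5 in the kubelet sources, matching the five attempts in this log) before the kubelet gives up with the "exceeds retry count" error that appears at 06:48:09.195336 below. A rough sketch of that control flow, with illustrative names rather than the actual kubelet code:

```go
// Illustrative sketch (not the real kubelet code) of the bounded retry visible
// in this log: five patch attempts per sync, then a single give-up error.
package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry is 5 in the kubelet sources, matching the five
// "will retry" entries that precede the give-up below.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the PATCH that the expired webhook
// certificate rejects during the TLS handshake.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count") // the E entry at 06:48:09.195336
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```
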
Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.173135 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.173165 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.173184 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.195082 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err=[status patch payload and webhook certificate error identical to the 06:48:09.111167 entry above; duplicate omitted] Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.195336 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.198037 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
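
Independently of the webhook failure, the recurring NodeNotReady condition comes from the container runtime: NetworkReady stays false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet. A small Go sketch of the check implied by that message, assuming only that the loader accepts the standard .conf, .conflist and .json extensions:

```go
// Minimal sketch of the readiness check implied by "no CNI configuration file
// in /etc/kubernetes/cni/net.d/": report not-ready while the directory holds
// no recognizable CNI network configuration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, _ := filepath.Glob(filepath.Join(confDir, "*"))
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e) {
		case ".conf", ".conflist", ".json": // extensions accepted by the CNI config loader
			confs = append(confs, e)
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, configs:", confs)
}
```

Once the cluster network operator writes a configuration into that directory, the runtime should report NetworkReady=true and the kubelet stops emitting this condition.
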
Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.198110 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.198140 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.198176 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.198197 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.303167 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.303244 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.303268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.303301 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.303325 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.340011 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.340168 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.340237 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:11.340219076 +0000 UTC m=+39.251302892 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.406153 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.406202 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.406216 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.406238 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.406254 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.510059 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.510192 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.510214 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.510247 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.510272 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.563560 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.590734 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.610463 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.613341 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.613397 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.613420 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.613453 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.613477 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.630653 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.647087 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.668926 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.696511 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.714007 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.716846 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.716932 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.716958 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.716996 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 
06:48:09.717026 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.735791 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.755330 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.775958 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.789629 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.789629 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.789854 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:09 crc kubenswrapper[5058]: E1014 06:48:09.789968 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.794924 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.817959 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-
cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.820510 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.820574 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.820601 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.820638 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.820664 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.840898 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.873047 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002
b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.892201 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.923953 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.924035 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.924053 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.924081 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.924099 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:09Z","lastTransitionTime":"2025-10-14T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.926744 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:09 crc kubenswrapper[5058]: I1014 06:48:09.950038 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:09Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.026862 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.026939 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.026957 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.026986 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.027005 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.131553 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.131641 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.131664 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.131703 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.131727 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.234609 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.234693 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.234708 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.234734 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.234751 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.337973 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.338054 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.338078 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.338113 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.338139 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.441933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.442006 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.442024 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.442055 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.442078 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.544877 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.544945 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.544970 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.545048 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.545079 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.648768 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.648833 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.648844 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.648864 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.648880 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.752312 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.752378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.752396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.752422 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.752442 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.789327 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.789349 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:10 crc kubenswrapper[5058]: E1014 06:48:10.789501 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
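The NodeNotReady condition and the "Error syncing pod" records share one cause: kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/. With OVN-Kubernetes that file is normally written by the ovnkube node components, and the earlier records show ovnkube-controller exiting with code 1 and restarting, so the file never appears. A quick sketch, run on the node (path taken verbatim from the NetworkPluginNotReady message), to check whether the directory has been populated yet:

    from pathlib import Path

    # Directory named in the NetworkPluginNotReady message above.
    cni_dir = Path("/etc/kubernetes/cni/net.d")

    confs = sorted(cni_dir.glob("*")) if cni_dir.is_dir() else []
    print(f"{cni_dir}: {len(confs)} file(s)")
    for p in confs:
        print(" ", p.name)

    # Until ovnkube-controller stays up long enough to write its CNI
    # config here, the node remains NotReady and no pod sandboxes can
    # be created, which is exactly what the sandbox errors below show.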
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:10 crc kubenswrapper[5058]: E1014 06:48:10.789679 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.854892 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.854940 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.854951 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.854974 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.854987 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.958363 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.958682 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.958755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.958868 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:10 crc kubenswrapper[5058]: I1014 06:48:10.959312 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:10Z","lastTransitionTime":"2025-10-14T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.066356 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.066410 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.066420 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.066438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.066452 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.169576 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.169678 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.169699 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.169725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.169746 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.273065 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.273129 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.273142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.273165 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.273181 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.357964 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:11 crc kubenswrapper[5058]: E1014 06:48:11.358181 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:11 crc kubenswrapper[5058]: E1014 06:48:11.358256 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:15.358236556 +0000 UTC m=+43.269320372 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.376982 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.377051 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.377065 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.377088 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.377104 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.480593 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.480710 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.480729 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.480756 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.480774 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.584368 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.584421 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.584432 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.584450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.584463 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.688396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.688430 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.688439 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.688455 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.688464 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.789210 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.789211 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:11 crc kubenswrapper[5058]: E1014 06:48:11.789403 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:11 crc kubenswrapper[5058]: E1014 06:48:11.789594 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.791775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.791879 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.791898 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.791920 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.791940 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.894694 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.894747 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.894767 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.894790 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.894845 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.997764 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.997875 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.997898 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.997931 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:11 crc kubenswrapper[5058]: I1014 06:48:11.997956 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:11Z","lastTransitionTime":"2025-10-14T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.100396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.100457 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.100476 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.100502 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.100521 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.203471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.203544 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.203561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.203591 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.203611 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.307063 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.307140 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.307161 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.307188 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.307207 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.411135 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.411200 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.411224 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.411258 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.411282 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.514939 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.515000 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.515017 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.515043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.515061 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.618153 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.618231 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.618254 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.618281 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.618299 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.720929 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.720972 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.720983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.721001 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.721013 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.789461 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:12 crc kubenswrapper[5058]: E1014 06:48:12.789577 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.789584 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:12 crc kubenswrapper[5058]: E1014 06:48:12.789640 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.807832 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.824298 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 
14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.824353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.824365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.824387 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.824404 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.827889 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.857790 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\
\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82c0afc4c49447ae37f70c9faef8d94779d636bea6da4398aa4fcf58dbc6c850\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:03Z\\\",\\\"message\\\":\\\"or removal\\\\nI1014 06:48:03.516671 6363 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1014 06:48:03.516678 6363 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1014 06:48:03.516705 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 06:48:03.516740 6363 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 06:48:03.516754 6363 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1014 06:48:03.516756 6363 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1014 06:48:03.516788 6363 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 06:48:03.516836 6363 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1014 06:48:03.516852 6363 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1014 06:48:03.516856 6363 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1014 06:48:03.516879 6363 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1014 06:48:03.516905 6363 factory.go:656] Stopping watch factory\\\\nI1014 06:48:03.516916 6363 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 06:48:03.516928 6363 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1014 06:48:03.516924 6363 handler.go:208] Removed *v1.Node event handler 2\\\\nI1014 06:48:03.516950 6363 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] 
Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.874427 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.897439 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.931050 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.931123 5058 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.931140 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.931169 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.931188 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:12Z","lastTransitionTime":"2025-10-14T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.932642 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.957010 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:12 crc kubenswrapper[5058]: I1014 06:48:12.978218 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.000490 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:12Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.022651 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.035176 5058 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.035208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.035219 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.035264 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.035280 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.041782 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.062264 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.081641 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.099001 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.114582 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.133013 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.138522 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.138605 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.138628 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.138657 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.138679 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.155652 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.242910 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.242984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc 
kubenswrapper[5058]: I1014 06:48:13.243002 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.243034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.243054 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.346732 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.346880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.346905 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.346945 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.346971 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.450982 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.451048 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.451067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.451111 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.451133 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.524064 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.525652 5058 scope.go:117] "RemoveContainer" containerID="fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb" Oct 14 06:48:13 crc kubenswrapper[5058]: E1014 06:48:13.526108 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.546072 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.561870 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.561939 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.561960 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.561990 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.562011 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.573230 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.607129 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.635889 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002
b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.652028 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.665026 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.665063 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.665081 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.665105 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.665122 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.687219 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.705706 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.727779 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.749161 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.768861 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.769114 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.769280 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.769456 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.769616 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.771674 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.789220 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.789312 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.789312 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:13 crc kubenswrapper[5058]: E1014 06:48:13.789984 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:13 crc kubenswrapper[5058]: E1014 06:48:13.790090 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.811377 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/
cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.837447 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.853720 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.873097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.873149 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.873167 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.873194 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.873214 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.876176 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.896576 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.913387 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:13Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.977991 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.978056 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.978078 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.978105 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:13 crc kubenswrapper[5058]: I1014 06:48:13.978123 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:13Z","lastTransitionTime":"2025-10-14T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.081877 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.082227 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.082318 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.082407 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.082489 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.185205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.185263 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.185280 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.185305 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.185323 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.288617 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.288679 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.288701 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.288729 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.288748 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.391845 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.391903 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.391921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.391950 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.391973 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.495562 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.495665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.495684 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.495721 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.495744 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.599264 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.599331 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.599357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.599396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.599425 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.702718 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.702816 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.702839 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.702869 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.702885 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.789151 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.789200 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:14 crc kubenswrapper[5058]: E1014 06:48:14.789417 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:14 crc kubenswrapper[5058]: E1014 06:48:14.789599 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.806079 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.806159 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.806187 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.806219 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.806244 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.908894 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.908959 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.908976 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.909003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:14 crc kubenswrapper[5058]: I1014 06:48:14.909021 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:14Z","lastTransitionTime":"2025-10-14T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.012408 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.012458 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.012474 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.012500 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.012517 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.116373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.116448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.116469 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.116498 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.116517 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.219415 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.219490 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.219510 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.219539 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.219594 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.323137 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.323200 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.323218 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.323246 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.323273 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.408672 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:15 crc kubenswrapper[5058]: E1014 06:48:15.409041 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:15 crc kubenswrapper[5058]: E1014 06:48:15.409199 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:23.409161813 +0000 UTC m=+51.320245649 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.425456 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.425529 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.425549 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.425575 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.425592 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.528555 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.528652 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.528680 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.528715 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.528742 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.631616 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.631684 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.631700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.631728 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.631747 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.735205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.735279 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.735303 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.735331 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.735351 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.788942 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.788979 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:15 crc kubenswrapper[5058]: E1014 06:48:15.789152 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:15 crc kubenswrapper[5058]: E1014 06:48:15.789330 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.838277 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.838350 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.838370 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.838398 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.838419 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.941588 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.941663 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.941688 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.941717 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:15 crc kubenswrapper[5058]: I1014 06:48:15.941738 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:15Z","lastTransitionTime":"2025-10-14T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.045243 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.045379 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.045400 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.045433 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.045455 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.149536 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.149619 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.149639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.149668 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.149690 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.252850 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.252918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.252935 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.252959 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.252980 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.355706 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.355777 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.355833 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.355900 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.355925 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.459891 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.459940 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.459955 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.459979 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.459999 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.563685 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.563749 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.563765 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.563793 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.563875 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.666858 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.666923 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.666957 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.666984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.667002 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.771190 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.771268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.771287 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.771315 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.771334 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.790057 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.790145 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:16 crc kubenswrapper[5058]: E1014 06:48:16.790317 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:16 crc kubenswrapper[5058]: E1014 06:48:16.790532 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.875411 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.875499 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.875522 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.875554 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.875572 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.979392 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.979907 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.980097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.980262 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:16 crc kubenswrapper[5058]: I1014 06:48:16.980396 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:16Z","lastTransitionTime":"2025-10-14T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.084532 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.084590 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.084608 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.084636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.084655 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.188627 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.188701 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.188725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.188761 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.188784 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.295592 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.295738 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.295849 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.295907 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.295951 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.400315 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.400424 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.400452 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.400530 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.400558 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.504629 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.505139 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.505157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.505185 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.505204 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.608722 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.608824 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.608843 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.608871 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.608893 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.712369 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.712431 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.712450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.712476 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.712499 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.789473 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.789570 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:17 crc kubenswrapper[5058]: E1014 06:48:17.789782 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:17 crc kubenswrapper[5058]: E1014 06:48:17.790035 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.816749 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.816831 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.816846 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.816870 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.816886 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.921096 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.921174 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.921193 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.921226 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:17 crc kubenswrapper[5058]: I1014 06:48:17.921259 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:17Z","lastTransitionTime":"2025-10-14T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.025295 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.025386 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.025414 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.025453 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.025475 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.128763 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.128892 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.128917 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.128944 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.128962 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.232044 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.232132 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.232153 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.232183 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.232203 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.335842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.335922 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.335950 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.335982 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.336005 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.439329 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.439381 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.439412 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.439450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.439474 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.542911 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.542993 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.543017 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.543050 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.543072 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.646488 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.646574 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.646602 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.646640 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.646663 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.751401 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.751471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.751491 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.751520 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.751540 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.789554 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.789549 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:18 crc kubenswrapper[5058]: E1014 06:48:18.789756 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:18 crc kubenswrapper[5058]: E1014 06:48:18.789961 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.854385 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.854441 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.854455 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.854474 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.854488 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.957464 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.957523 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.957535 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.957559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:18 crc kubenswrapper[5058]: I1014 06:48:18.957574 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:18Z","lastTransitionTime":"2025-10-14T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.061047 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.061192 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.061283 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.061361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.061384 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.164782 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.164898 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.164916 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.164944 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.164963 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.268267 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.268335 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.268353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.268383 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.268402 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.282287 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.282347 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.282365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.282391 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.282407 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.303471 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:19Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.310034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.310187 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.310208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.310232 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.310250 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.333012 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:19Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.337892 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.337956 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.337980 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.338013 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.338038 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.369225 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:19Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.375068 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.375133 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.375151 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.375182 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.375202 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.396975 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:19Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.403115 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.403203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.403222 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.403252 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.403272 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.431840 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:19Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.432035 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.434081 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.434195 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.434218 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.434305 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.434327 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.538140 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.538203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.538220 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.538245 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.538264 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.642071 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.642166 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.642187 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.642228 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.642254 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.746770 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.746891 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.746910 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.746939 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.746958 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.789734 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.789917 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.790032 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:19 crc kubenswrapper[5058]: E1014 06:48:19.790167 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.850340 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.850435 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.850459 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.850501 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.850529 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.955262 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.955345 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.955365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.955396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:19 crc kubenswrapper[5058]: I1014 06:48:19.955416 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:19Z","lastTransitionTime":"2025-10-14T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.059339 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.059426 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.059453 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.059491 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.059519 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.163327 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.163395 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.163413 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.163438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.163457 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.266953 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.267112 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.267138 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.267165 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.267184 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.371414 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.371494 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.371519 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.371551 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.371578 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.475486 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.475570 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.475592 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.475622 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.475644 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.579147 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.579217 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.579238 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.579268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.579288 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.682200 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.682281 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.682299 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.682329 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.682351 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.785442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.785491 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.785502 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.785522 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.785537 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.789709 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.789806 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:20 crc kubenswrapper[5058]: E1014 06:48:20.789872 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:20 crc kubenswrapper[5058]: E1014 06:48:20.789961 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.888137 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.888205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.888226 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.888254 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.888272 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.991644 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.991749 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.991828 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.991863 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:20 crc kubenswrapper[5058]: I1014 06:48:20.991887 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:20Z","lastTransitionTime":"2025-10-14T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.095988 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.096070 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.096093 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.096125 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.096149 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.199291 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.199361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.199377 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.199403 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.199422 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.302312 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.302390 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.302409 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.302439 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.302459 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.406682 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.406738 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.406748 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.406771 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.406783 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.510772 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.510876 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.510894 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.510920 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.510941 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.615638 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.615727 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.615751 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.615783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.615869 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.718926 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.719003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.719021 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.719053 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.719074 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.789877 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.789933 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:21 crc kubenswrapper[5058]: E1014 06:48:21.790119 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:21 crc kubenswrapper[5058]: E1014 06:48:21.790279 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.822466 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.822545 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.822571 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.822605 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.822628 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.926351 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.926426 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.926451 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.926489 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:21 crc kubenswrapper[5058]: I1014 06:48:21.926517 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:21Z","lastTransitionTime":"2025-10-14T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.031223 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.031294 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.031319 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.031352 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.031388 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.134678 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.134745 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.134769 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.134834 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.134865 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.238376 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.238442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.238461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.238488 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.238507 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.342142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.342227 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.342250 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.342281 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.342303 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.445640 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.445755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.445842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.445876 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.445940 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.549491 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.549562 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.549586 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.549620 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.549644 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.653203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.653275 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.653294 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.653323 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.653350 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.757306 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.757364 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.757376 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.757406 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.757420 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.789539 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.789639 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:22 crc kubenswrapper[5058]: E1014 06:48:22.789753 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:22 crc kubenswrapper[5058]: E1014 06:48:22.790112 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.811874 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.863260 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002
b695a8fc159b7754d06bfddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.875285 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.875353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.875368 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.875391 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.875408 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.910743 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.928266 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc 
kubenswrapper[5058]: I1014 06:48:22.945568 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.974403 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.978284 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.978337 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.978353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.978378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.978390 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:22Z","lastTransitionTime":"2025-10-14T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:22 crc kubenswrapper[5058]: I1014 06:48:22.994750 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:22Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.014471 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.030346 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.045207 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.060499 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.075217 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.082094 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.082145 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.082158 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.082178 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.082191 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.096478 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.109765 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.133496 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.152730 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.167273 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:23Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.185071 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.185111 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.185124 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.185145 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 
06:48:23.185159 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.288203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.288232 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.288240 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.288254 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.288263 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.391581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.391642 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.391660 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.391687 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.391708 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.414350 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:23 crc kubenswrapper[5058]: E1014 06:48:23.414544 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:23 crc kubenswrapper[5058]: E1014 06:48:23.414617 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:39.414592347 +0000 UTC m=+67.325676193 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.495201 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.495269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.495288 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.495316 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.495334 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.598481 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.598526 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.598546 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.598569 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.598587 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.701608 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.701679 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.701703 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.701734 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.701756 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.789262 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.789297 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:23 crc kubenswrapper[5058]: E1014 06:48:23.789542 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
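The run of NetworkPluginNotReady records above reduces to one condition: the kubelet finds no CNI network configuration under /etc/kubernetes/cni/net.d/ until the network provider (here ovnkube-node, whose container start is logged further down) writes one. A minimal diagnostic sketch in Go, assuming it runs on the node itself; the path is taken verbatim from the log message, and the .conf/.conflist/.json extensions are the ones libcni conventionally loads:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Check the directory the kubelet names in its NetworkPluginNotReady message.
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := false
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration files yet; Ready will stay False until the provider writes one")
	}
}

While the directory stays empty, every sync of a pod that needs pod networking is skipped with the same "network is not ready" error, as the pod_workers records before and after this point show.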
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:23 crc kubenswrapper[5058]: E1014 06:48:23.789679 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.805299 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.805357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.805373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.805399 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.805418 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.909361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.909448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.909475 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.909506 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:23 crc kubenswrapper[5058]: I1014 06:48:23.909531 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:23Z","lastTransitionTime":"2025-10-14T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.013460 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.013546 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.013564 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.013590 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.013610 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.116744 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.116806 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.116816 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.116834 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.116847 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.220060 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.220107 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.220119 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.220138 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.220152 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.322499 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.322553 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.322565 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.322582 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.322595 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.425067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.425117 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.425128 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.425148 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.425166 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.528628 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.528701 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.528724 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.528758 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.528782 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.628350 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.628589 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.628698 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.628732 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:48:56.628677121 +0000 UTC m=+84.539760967 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.628850 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:56.628779864 +0000 UTC m=+84.539863950 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.629056 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.629191 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.629305 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629368 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629492 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629558 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629398 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629634 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:56.629616888 +0000 UTC m=+84.540700734 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629768 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629891 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.629624 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.630059 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:56.630016219 +0000 UTC m=+84.541100235 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.630116 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:48:56.630097282 +0000 UTC m=+84.541181378 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.631277 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.631350 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.631373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.631407 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.631435 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.735361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.735441 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.735465 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.735494 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.735512 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.789365 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.789690 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.789789 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:24 crc kubenswrapper[5058]: E1014 06:48:24.790187 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.839379 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.839429 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.839446 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.839471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.839490 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.943254 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.943322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.943340 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.943367 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:24 crc kubenswrapper[5058]: I1014 06:48:24.943386 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:24Z","lastTransitionTime":"2025-10-14T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.045480 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.045559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.045579 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.045601 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.045614 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.149208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.149263 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.149275 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.149293 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.149580 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.253026 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.253112 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.253129 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.253157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.253178 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.356085 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.356169 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.356191 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.356221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.356241 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.460700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.460760 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.460778 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.460852 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.460874 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.564382 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.564458 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.564478 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.564505 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.564526 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.667602 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.668058 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.668254 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.668399 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.668592 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.771750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.772122 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.772299 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.772471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.772603 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.789145 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.789204 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:25 crc kubenswrapper[5058]: E1014 06:48:25.789330 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
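Interleaved with the CNI wait, every pod status patch in this window is rejected by the pod.network-node-identity.openshift.io webhook because the certificate served on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, well before the current clock of 2025-10-14. A small probe, sketched in Go, that prints the validity window of whatever certificate that endpoint is serving (InsecureSkipVerify is deliberate: the handshake must complete so the expired leaf can be read instead of failing verification):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the failed webhook POST in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expiredNow=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}

The failed status patches are retried by the kubelet's status manager, so nothing is lost permanently; the probe only confirms whether the endpoint is still serving the stale certificate.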
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:25 crc kubenswrapper[5058]: E1014 06:48:25.789621 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.875458 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.875544 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.875565 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.876010 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.876171 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.979512 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.979570 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.979584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.979606 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:25 crc kubenswrapper[5058]: I1014 06:48:25.979625 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:25Z","lastTransitionTime":"2025-10-14T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.082577 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.082612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.082623 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.082644 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.082657 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.186268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.186331 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.186349 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.186375 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.186392 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.289260 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.289665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.289831 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.289975 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.290137 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.393596 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.394015 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.394192 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.394361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.394601 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.499017 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.499075 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.499100 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.499129 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.499150 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.602790 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.603150 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.603349 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.603533 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.603831 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.707129 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.707202 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.707221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.707250 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.707270 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.789361 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.789385 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:26 crc kubenswrapper[5058]: E1014 06:48:26.790189 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:26 crc kubenswrapper[5058]: E1014 06:48:26.790352 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.790699 5058 scope.go:117] "RemoveContainer" containerID="fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.810211 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.810272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.810288 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.810311 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.810329 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.914908 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.915281 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.915302 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.915332 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:26 crc kubenswrapper[5058]: I1014 06:48:26.915364 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:26Z","lastTransitionTime":"2025-10-14T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.020085 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.020142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.020163 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.020189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.020207 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.123317 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.123357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.123365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.123380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.123390 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.226356 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.226569 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.226594 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.226662 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.226683 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.246779 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/1.log" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.257090 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.258469 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.277350 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.300007 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.324497 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.329500 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.329550 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.329563 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.329584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.329601 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.362698 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.376690 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.397316 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.411574 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.422409 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.431707 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.431751 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.431766 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.431787 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.431822 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.434254 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.447682 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.460040 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.474270 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.498130 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.513183 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.534891 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.534962 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.534987 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.535024 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 
06:48:27.535049 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.536957 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.551859 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.572323 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.637741 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.637835 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.637855 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.637881 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.637898 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.741419 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.741471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.741484 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.741502 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.741514 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.790042 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.790113 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:27 crc kubenswrapper[5058]: E1014 06:48:27.790277 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:27 crc kubenswrapper[5058]: E1014 06:48:27.790440 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.844261 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.844325 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.844341 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.844365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.844383 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.890705 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.905521 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.916508 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.936162 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.948424 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.948459 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.948467 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.948483 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.948494 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:27Z","lastTransitionTime":"2025-10-14T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.955722 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.969937 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:27 crc kubenswrapper[5058]: I1014 06:48:27.988719 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:27Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.004646 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.021222 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.038879 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.051307 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.051349 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.051361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.051380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.051392 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.053689 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.067277 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.080757 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.097084 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.120372 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20e
cbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.131853 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.143863 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.153734 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.153827 5058 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.153842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.153862 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.153872 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.164836 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.179941 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.257111 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.257183 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.257201 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.257229 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.257250 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.263758 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/2.log" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.265163 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/1.log" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.269581 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca" exitCode=1 Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.269658 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.269772 5058 scope.go:117] "RemoveContainer" containerID="fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.272006 5058 scope.go:117] "RemoveContainer" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca" Oct 14 06:48:28 crc kubenswrapper[5058]: E1014 06:48:28.272521 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.300568 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.318963 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.338771 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.356006 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.360822 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.360877 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.360894 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.360918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.360935 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.377389 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.403347 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.424344 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.442660 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.458712 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.465066 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.465118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.465130 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.465152 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.465164 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.476060 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.494862 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.513573 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.528005 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.556406 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20e
cbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb8f8678333d91d02b71046e977214786a32b002b695a8fc159b7754d06bfddb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:05Z\\\",\\\"message\\\":\\\" 6488 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1014 06:48:05.170421 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1014 06:48:05.170426 6488 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1014 06:48:05.170400 6488 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1014 06:48:05.170437 6488 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:05.170486 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.568167 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.568211 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.568248 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.568286 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.568299 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.572220 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.586283 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc 
kubenswrapper[5058]: I1014 06:48:28.612127 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.627565 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:28Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.671704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.671759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.671776 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.671834 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.671856 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.775424 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.775538 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.775565 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.775600 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.775626 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.789418 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.789501 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:28 crc kubenswrapper[5058]: E1014 06:48:28.789602 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:28 crc kubenswrapper[5058]: E1014 06:48:28.789709 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.879645 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.879710 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.879727 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.879753 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.879772 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.983189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.983258 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.983275 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.983304 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:28 crc kubenswrapper[5058]: I1014 06:48:28.983326 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:28Z","lastTransitionTime":"2025-10-14T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.086881 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.086952 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.086975 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.087006 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.087027 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.199427 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.199490 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.199506 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.199533 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.199551 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.276875 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/2.log" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.282405 5058 scope.go:117] "RemoveContainer" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.282739 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.302261 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.302363 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.302418 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.302445 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.302467 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.306852 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.325491 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.344659 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.360322 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.379977 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.405720 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.405768 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.405785 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.405845 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.405870 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.408744 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.426576 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.447269 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.453117 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.453188 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.453208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.453239 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.453259 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.472189 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.474911 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.479773 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.479848 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.479866 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.479891 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.479910 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.502205 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.504786 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.508922 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.509016 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.509035 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.509060 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.509080 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.525152 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.529644 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.534764 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.534858 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.534878 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.534904 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.534951 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.542047 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.551753 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.557372 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.557415 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.557428 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.557448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.557462 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.565076 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.577411 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.577604 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.585036 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.585110 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.585136 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.585168 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.585191 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.598097 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.617128 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.634373 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.653986 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.675295 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:29Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.689438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.689486 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.689498 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.689523 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.689538 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.789507 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.789614 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.789918 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:29 crc kubenswrapper[5058]: E1014 06:48:29.790118 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.792504 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.792561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.792582 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.792610 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.792629 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.897887 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.897951 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.897963 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.897985 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:29 crc kubenswrapper[5058]: I1014 06:48:29.898001 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:29Z","lastTransitionTime":"2025-10-14T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.002211 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.002266 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.002278 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.002298 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.002312 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.106113 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.106178 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.106195 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.106228 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.106250 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.210878 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.210958 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.210974 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.211003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.211022 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.314700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.314792 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.314853 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.314880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.314901 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.418771 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.418872 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.418899 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.418931 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.418952 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.522976 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.523033 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.523050 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.523076 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.523093 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.626079 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.626146 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.626169 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.626199 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.626222 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.729226 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.729299 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.729315 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.729340 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.729357 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.789377 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.789524 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:30 crc kubenswrapper[5058]: E1014 06:48:30.789587 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:30 crc kubenswrapper[5058]: E1014 06:48:30.789789 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.832918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.833000 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.833024 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.833051 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.833071 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.935921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.935991 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.936011 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.936036 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:30 crc kubenswrapper[5058]: I1014 06:48:30.936054 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:30Z","lastTransitionTime":"2025-10-14T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.039395 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.039463 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.039482 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.039509 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.039528 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.142296 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.142389 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.142407 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.142428 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.142445 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.245402 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.245448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.245460 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.245478 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.245500 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.347953 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.348003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.348018 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.348039 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.348054 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.451759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.451873 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.451892 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.451920 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.451945 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.555527 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.555595 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.555612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.555634 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.555650 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.658966 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.659038 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.659062 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.659093 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.659113 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.764010 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.764122 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.764142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.764166 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.764193 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.789763 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.789785 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:31 crc kubenswrapper[5058]: E1014 06:48:31.790031 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:31 crc kubenswrapper[5058]: E1014 06:48:31.790090 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.867472 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.867546 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.867564 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.867589 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.867606 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.971383 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.971427 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.971436 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.971451 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:31 crc kubenswrapper[5058]: I1014 06:48:31.971462 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:31Z","lastTransitionTime":"2025-10-14T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.074558 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.074626 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.074643 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.074668 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.074686 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.177705 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.177770 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.177788 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.177841 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.177861 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.281447 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.281499 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.281517 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.281541 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.281558 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.386161 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.386222 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.386238 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.386263 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.386281 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.489669 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.489750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.489771 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.489837 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.489859 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.594211 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.594271 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.594289 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.594313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.594333 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.698439 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.698883 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.699039 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.699189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.699310 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.789015 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:32 crc kubenswrapper[5058]: E1014 06:48:32.789227 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.789917 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:32 crc kubenswrapper[5058]: E1014 06:48:32.790041 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.802405 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.802675 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.803072 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.803434 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.803586 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.814514 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.839955 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.861352 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.886423 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.907592 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.907658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.907682 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.907713 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.907737 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:32Z","lastTransitionTime":"2025-10-14T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.908417 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.926331 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.946426 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.969133 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 
2025-08-24T17:21:41Z" Oct 14 06:48:32 crc kubenswrapper[5058]: I1014 06:48:32.987775 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:32Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.010520 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.011578 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.011734 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.011853 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.011983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.012095 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.031026 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.054133 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.076467 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.112668 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20e
cbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.116507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.116584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.116603 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.116632 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.116652 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.134191 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.154773 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc 
kubenswrapper[5058]: I1014 06:48:33.191554 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.210629 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:33Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.219880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.219935 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.219949 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.219973 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.219989 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.322880 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.323395 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.323477 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.323554 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.323638 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.426934 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.426973 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.426988 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.427010 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.427023 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.534313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.535005 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.535038 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.535078 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.535115 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.638983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.639040 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.639057 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.639085 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.639105 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.741920 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.741987 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.742007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.742032 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.742049 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.789330 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.789330 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:33 crc kubenswrapper[5058]: E1014 06:48:33.789574 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:33 crc kubenswrapper[5058]: E1014 06:48:33.789718 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.845326 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.845391 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.845413 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.845446 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.845467 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.949253 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.949317 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.949336 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.949361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:33 crc kubenswrapper[5058]: I1014 06:48:33.949378 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:33Z","lastTransitionTime":"2025-10-14T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.052507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.052576 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.052594 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.052619 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.052637 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.156234 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.156311 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.156333 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.156366 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.156385 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.259933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.260005 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.260028 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.260059 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.260083 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.364930 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.365007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.365023 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.365048 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.365066 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.469524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.469698 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.469726 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.469862 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.469890 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.573224 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.573273 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.573282 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.573298 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.573312 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.676395 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.676435 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.676445 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.676461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.676474 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.778741 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.778776 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.778784 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.778818 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.778827 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.789366 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:34 crc kubenswrapper[5058]: E1014 06:48:34.789481 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.789537 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:34 crc kubenswrapper[5058]: E1014 06:48:34.789687 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.882399 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.882462 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.882479 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.882505 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.882523 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.986250 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.986315 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.986333 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.986357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:34 crc kubenswrapper[5058]: I1014 06:48:34.986377 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:34Z","lastTransitionTime":"2025-10-14T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.090943 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.091020 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.091042 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.091073 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.091097 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.194013 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.194073 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.194095 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.194123 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.194144 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.296543 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.296601 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.296623 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.296648 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.296665 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.399486 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.399561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.399578 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.399605 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.399624 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.516361 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.516426 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.516444 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.516469 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.516487 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.620560 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.620616 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.620629 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.620648 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.620661 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.724317 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.724378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.724394 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.724418 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.724436 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.789607 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.789666 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:35 crc kubenswrapper[5058]: E1014 06:48:35.789964 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:35 crc kubenswrapper[5058]: E1014 06:48:35.790099 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.828050 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.828111 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.828129 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.828154 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.828171 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.931570 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.931618 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.931630 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.931649 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:35 crc kubenswrapper[5058]: I1014 06:48:35.931661 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:35Z","lastTransitionTime":"2025-10-14T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.035255 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.035343 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.035362 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.035388 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.035407 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.138208 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.138272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.138283 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.138304 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.138318 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.240883 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.240949 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.240963 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.240989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.241062 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.344413 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.344464 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.344475 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.344494 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.344507 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.453526 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.453621 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.453639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.453664 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.453681 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.557434 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.557522 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.557546 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.557581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.557607 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.660476 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.660561 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.660584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.660611 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.660629 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.764369 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.764431 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.764448 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.764473 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.764491 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.789213 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:36 crc kubenswrapper[5058]: E1014 06:48:36.789373 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.789582 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:36 crc kubenswrapper[5058]: E1014 06:48:36.789632 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.866164 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.866209 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.866224 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.866245 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.866262 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.970442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.970487 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.970495 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.970513 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:36 crc kubenswrapper[5058]: I1014 06:48:36.970524 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:36Z","lastTransitionTime":"2025-10-14T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.074649 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.074704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.074716 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.074930 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.074943 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.178435 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.178485 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.178494 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.178513 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.178526 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.282092 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.282147 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.282158 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.282176 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.282187 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.385537 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.385605 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.385624 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.385650 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.385670 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.488676 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.488747 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.488765 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.488820 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.488849 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.591783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.591926 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.591952 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.591983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.592008 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.695591 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.695644 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.695663 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.695692 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.695722 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.789565 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.789765 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:37 crc kubenswrapper[5058]: E1014 06:48:37.789906 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:37 crc kubenswrapper[5058]: E1014 06:48:37.790041 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.799279 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.799337 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.799352 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.799375 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.799389 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.903172 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.903221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.903233 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.903251 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:37 crc kubenswrapper[5058]: I1014 06:48:37.903263 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:37Z","lastTransitionTime":"2025-10-14T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.006165 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.006203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.006213 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.006232 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.006244 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.109335 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.109405 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.109423 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.109450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.109476 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.212176 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.212220 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.212229 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.212246 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.212258 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.316101 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.316155 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.316172 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.316196 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.316217 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.419104 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.419183 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.419201 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.419230 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.419250 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.522006 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.522048 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.522058 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.522075 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.522088 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.624691 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.624739 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.624750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.624766 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.624780 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.727353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.727403 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.727415 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.727437 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.727451 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.789776 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.789905 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:38 crc kubenswrapper[5058]: E1014 06:48:38.790020 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
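
Every condition={...} payload in the setters.go:603 entries is one NodeCondition on the node's status object. A dependency-free mirror of the fields involved (the canonical type is k8s.io/api/core/v1.NodeCondition; the local struct below flattens its metav1.Time fields to plain strings for the sake of a standalone sketch):

    // condition.go: reproduce the shape of the condition={...} payload
    // that setters.go logs when it marks the node not ready.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Field names and order mirror v1.NodeCondition's JSON tags.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        c := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  "2025-10-14T06:48:38Z",
            LastTransitionTime: "2025-10-14T06:48:38Z",
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: ...", // full message elided
        }
        out, _ := json.Marshal(c) // a struct of plain strings cannot fail to marshal
        fmt.Println(string(out))
    }

Marshaling that struct reproduces the payload shape logged above, which is also the "Ready" element of the status patches attempted later in this log.
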
Oct 14 06:48:38 crc kubenswrapper[5058]: E1014 06:48:38.790113 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.830123 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.830152 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.830159 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.830171 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.830180 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.932318 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.932346 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.932354 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.932367 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:38 crc kubenswrapper[5058]: I1014 06:48:38.932382 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:38Z","lastTransitionTime":"2025-10-14T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.035409 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.035458 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.035469 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.035487 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.035501 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.139003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.139064 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.139077 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.139099 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.139112 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.242671 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.242724 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.242742 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.242768 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.242785 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.345723 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.345761 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.345774 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.345810 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.345823 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.415962 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.416102 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.416151 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:49:11.41613603 +0000 UTC m=+99.327219836 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.447624 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.447655 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.447666 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.447716 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.447730 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.549857 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.549886 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.549894 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.549907 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.549916 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
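
The nestedpendingoperations entry above does not hot-loop on the unregistered secret: the 32s durationBeforeRetry, with the next attempt pushed out to 06:49:11, is the signature of exponential backoff in the volume manager. The exact constants are kubelet internals; the sketch below only reproduces the doubling pattern, under an assumed 500ms base and an assumed cap, on which the seventh consecutive failure lands at a 32s wait:

    // backoff.go: generic exponential backoff matching the
    // durationBeforeRetry progression implied by the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // assumed base, not read from the log
        maxDelay := 2 * time.Minute     // assumed cap
        for failure := 1; failure <= 9; failure++ {
            fmt.Printf("failure %d: no retries permitted for %v\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Printed delays run 500ms, 1s, 2s, ..., 32s, so a 32s entry at 06:48:39 implies a string of earlier failures reaching back toward the m=+99s boot offset recorded in the timestamp.
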
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.652047 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.652283 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.652377 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.652475 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.652565 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.722578 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.722638 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.722648 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.722669 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.722681 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.740577 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:39Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.760981 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.761034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.761044 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.761062 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.761075 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.784632 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:39Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.789584 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.789634 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.789726 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.789833 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.792010 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.792058 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.792077 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.792109 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.792126 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
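
Every one of the "Error updating node status, will retry" patches in this stretch fails for the same root cause, spelled out at the end of each error: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, weeks before this 2025-10-14 boot, so the API server's webhook call fails and the kubelet's status patch is rejected. A quick way to confirm the validity window from the node, with the endpoint taken from the error text (InsecureSkipVerify is deliberate: the point is to read the expired certificate, not to trust it):

    // certcheck.go: dial the webhook endpoint named in the error and
    // print the validity window of the certificate it serves.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspect the cert even though it cannot verify
        })
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            fmt.Println("expired: matches the x509 error in the kubelet log")
        }
    }

Until that certificate is rotated, the kubelet keeps recording conditions locally (the setters.go entries, roughly every 100ms here) but cannot persist them to the API server, which is why the same five-entry block repeats throughout this stretch.
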
Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.815293 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:39Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.819419 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.819482 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.819508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.819531 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.819542 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.836133 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:39Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.847213 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.847260 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.847272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.847291 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.847306 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.861089 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:39Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:39 crc kubenswrapper[5058]: E1014 06:48:39.861220 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.862968 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.863046 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.863068 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.863098 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.863119 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.966135 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.966227 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.966271 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.966313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:39 crc kubenswrapper[5058]: I1014 06:48:39.966340 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:39Z","lastTransitionTime":"2025-10-14T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.070674 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.070750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.070768 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.070820 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.070841 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.174497 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.174559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.174571 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.174594 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.174612 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.277861 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.277918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.277929 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.277948 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.277960 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.380546 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.380602 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.380613 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.380628 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.380641 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.483250 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.483310 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.483322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.483340 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.483353 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.585847 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.585907 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.585921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.585943 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.585959 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.688296 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.688344 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.688354 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.688378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.688392 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.789122 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:40 crc kubenswrapper[5058]: E1014 06:48:40.789579 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.789179 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:40 crc kubenswrapper[5058]: E1014 06:48:40.790083 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.791694 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.791775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.791789 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.791839 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.791854 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.895086 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.895157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.895182 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.895214 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.895237 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.998547 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.998627 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.998640 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.998665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:40 crc kubenswrapper[5058]: I1014 06:48:40.998684 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:40Z","lastTransitionTime":"2025-10-14T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.101722 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.101815 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.101828 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.101876 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.101886 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.204971 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.205034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.205046 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.205067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.205080 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.308170 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.308252 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.308270 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.308303 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.308326 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.330676 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/0.log" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.330766 5058 generic.go:334] "Generic (PLEG): container finished" podID="1288bab5-7372-4acc-963c-6232b27a7975" containerID="4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23" exitCode=1 Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.330899 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerDied","Data":"4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.331789 5058 scope.go:117] "RemoveContainer" containerID="4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.352969 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.368573 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.387144 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z"
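The termination message captured above shows why kube-multus exited 1: it polls for a readiness indicator file that OVN-Kubernetes writes once its CNI config is installed, and gives up when the poll times out ("pollimmediate" points at wait.PollImmediate from k8s.io/apimachinery). Below is a minimal stdlib-only sketch of that gate, not the multus implementation itself; the path is taken from the log line, while the one-second interval and 45-second budget are assumptions read off the 06:47:55 start and 06:48:40 timeout stamps.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls until path exists or the deadline passes, the same
    // shape of check that produced "timed out waiting for the condition".
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // indicator present: the default network is up
            } else if !errors.Is(err, os.ErrNotExist) {
                return err // permission problems etc. should surface, not loop
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Path from the log; interval and timeout are illustrative guesses.
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        fmt.Println(err)
    }

Until that file appears, multus keeps failing, the kubelet finds no CNI config in /etc/kubernetes/cni/net.d/, and the node flips to NotReady, which is exactly what the next entries record.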
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.410620 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.410921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.410931 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.410948 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.410958 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.414131 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.432560 5058 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.451100 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.474585 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 
2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.491709 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.507363 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.513086 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.513109 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.513118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.513134 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.513145 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.520431 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.541215 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.553697 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
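The escaped payloads in these "failed to patch status" entries are strategic merge patches. The $setElementOrder/conditions directive pins the order of the conditions list after the merge, and each object under conditions carries only changed fields, with type acting as the merge key. Below is a sketch of the same shape as a hypothetical standalone program (not kubelet code), with field values borrowed from the surrounding entries:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Shape of the kubelet's status patch: $setElementOrder pins list
        // order, while "conditions" entries are merged by their "type" key.
        patch := map[string]any{
            "status": map[string]any{
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "False",
                        "reason":  "ContainersNotReady",
                        "message": "containers with unready status: [ovnkube-controller]"},
                },
            },
        }
        out, err := json.MarshalIndent(patch, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }

Marshalling it produces the same structure seen inside the err strings, minus the layers of quoting added by klog; since the webhook rejects every such PATCH, none of these merges land and the API server's view of the pods stays frozen until the certificate is fixed.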
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.583124 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20e
cbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.616082 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.616138 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.616154 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.616178 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.616195 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.618740 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.637386 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.657205 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.676441 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.695185 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:41Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.719143 5058 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.719216 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.719233 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.719260 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.719276 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.789281 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.789420 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:41 crc kubenswrapper[5058]: E1014 06:48:41.789455 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:41 crc kubenswrapper[5058]: E1014 06:48:41.789670 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.824222 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.824304 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.824323 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.824349 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.824368 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.927673 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.927737 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.927758 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.927786 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:41 crc kubenswrapper[5058]: I1014 06:48:41.927848 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:41Z","lastTransitionTime":"2025-10-14T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.031598 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.031646 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.031665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.031686 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.031701 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.134618 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.134678 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.134699 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.134726 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.134743 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.237432 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.237527 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.237581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.237612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.237631 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.336987 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/0.log" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.337046 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerStarted","Data":"a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.339709 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.339765 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.339786 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.339832 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.339849 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.354365 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.365336 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.378319 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.399487 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.413599 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.426048 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.440282 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.444233 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.444296 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.444305 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.444327 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.444339 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.459190 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.474765 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.486301 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.501648 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.521133 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.544063 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.546238 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.546267 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.546276 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.546290 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.546299 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.563251 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.585236 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc 
kubenswrapper[5058]: I1014 06:48:42.608135 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.646886 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.650965 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.651110 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.651195 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.651280 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.651366 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.666402 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.754168 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 
06:48:42.754215 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.754227 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.754245 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.754254 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.789286 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:42 crc kubenswrapper[5058]: E1014 06:48:42.789387 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.789958 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:42 crc kubenswrapper[5058]: E1014 06:48:42.790181 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.804290 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.821791 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.832562 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.844637 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.856131 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.856169 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.856179 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.856194 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.856206 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.860416 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.876786 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.896542 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.909020 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.925135 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.942076 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.959429 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.959488 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.959509 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.959534 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.959552 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:42Z","lastTransitionTime":"2025-10-14T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.966547 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.982950 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:42 crc kubenswrapper[5058]: I1014 06:48:42.996729 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:42Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.018653 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:43Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.039134 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:43Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.053871 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:43Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.062033 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.062058 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.062072 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.062090 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.062102 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.068773 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:43Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.080067 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:43Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.164283 5058 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.164442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.164507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.164589 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.164672 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.266313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.266614 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.266786 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.266990 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.267149 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.368692 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.369921 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.370135 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.370323 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.370466 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.476287 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.476352 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.476371 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.476398 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.476425 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.579267 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.579510 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.579571 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.579661 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.579754 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.682269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.682300 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.682309 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.682323 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.682335 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.785472 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.785515 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.785525 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.785541 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.785554 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.788951 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.788966 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.789590 5058 scope.go:117] "RemoveContainer" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca" Oct 14 06:48:43 crc kubenswrapper[5058]: E1014 06:48:43.789748 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:43 crc kubenswrapper[5058]: E1014 06:48:43.790125 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:43 crc kubenswrapper[5058]: E1014 06:48:43.790318 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.889003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.889043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.889051 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.889067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.889075 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.992622 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.992664 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.992674 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.992690 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:43 crc kubenswrapper[5058]: I1014 06:48:43.992699 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:43Z","lastTransitionTime":"2025-10-14T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status block repeats, now with 06:48:44 heartbeat/transition times, at 06:48:44.096, .199, .302, .404, .506, .609 and .711; omitted]
Oct 14 06:48:44 crc kubenswrapper[5058]: I1014 06:48:44.789974 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:44 crc kubenswrapper[5058]: I1014 06:48:44.790075 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:44 crc kubenswrapper[5058]: E1014 06:48:44.790159 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:48:44 crc kubenswrapper[5058]: E1014 06:48:44.790242 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeats at 06:48:44.814 and 06:48:44.917; omitted]
[node-status block repeats at 06:48:45.021, .124, .227, .330, .433, .536, .639 and .742; omitted]
Oct 14 06:48:45 crc kubenswrapper[5058]: I1014 06:48:45.789298 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:48:45 crc kubenswrapper[5058]: I1014 06:48:45.789311 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:48:45 crc kubenswrapper[5058]: E1014 06:48:45.789495 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 06:48:45 crc kubenswrapper[5058]: E1014 06:48:45.789621 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
[node-status block repeats at 06:48:45.853 and 06:48:45.956; omitted]
[node-status block repeats at 06:48:46.058, .161, .265, .367, .471, .574, .678 and .782; omitted]
Oct 14 06:48:46 crc kubenswrapper[5058]: I1014 06:48:46.788990 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:46 crc kubenswrapper[5058]: I1014 06:48:46.789017 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:46 crc kubenswrapper[5058]: E1014 06:48:46.789225 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 06:48:46 crc kubenswrapper[5058]: E1014 06:48:46.789763 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[node-status block repeats at 06:48:46.886 and 06:48:47.001; omitted]
[node-status block repeats at 06:48:47.105, .208, .311, .414, .518, .622 and .725; omitted]
Oct 14 06:48:47 crc kubenswrapper[5058]: I1014 06:48:47.789415 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:48:47 crc kubenswrapper[5058]: I1014 06:48:47.789615 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:48:47 crc kubenswrapper[5058]: E1014 06:48:47.789849 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
Oct 14 06:48:47 crc kubenswrapper[5058]: E1014 06:48:47.789965 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status block repeats at 06:48:47.828, 06:48:47.931, 06:48:48.035 and 06:48:48.139; omitted]
Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.242001 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.242064 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.242081 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.242104 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.242124 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.345445 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.345513 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.345526 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.345549 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.345565 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.449432 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.449480 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.449490 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.449508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.449520 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.552428 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.552486 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.552499 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.552521 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.552537 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.655217 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.655260 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.655269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.655283 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.655292 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.759123 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.759189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.759202 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.759228 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.759244 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.789635 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.789858 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:48 crc kubenswrapper[5058]: E1014 06:48:48.790063 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:48 crc kubenswrapper[5058]: E1014 06:48:48.790158 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.862519 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.862560 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.862569 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.862601 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.862610 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.965577 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.965641 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.965660 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.965684 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:48 crc kubenswrapper[5058]: I1014 06:48:48.965703 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:48Z","lastTransitionTime":"2025-10-14T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.071879 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.071947 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.071965 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.071989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.072005 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.175347 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.175403 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.175421 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.175446 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.175465 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.279026 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.279067 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.279078 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.279096 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.279107 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.381887 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.381960 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.381983 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.382012 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.382036 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.485848 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.485909 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.485926 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.485978 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.485999 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.589500 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.590354 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.590379 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.590405 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.590423 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.694621 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.694686 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.695176 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.695223 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.695241 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.789240 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.789277 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:49 crc kubenswrapper[5058]: E1014 06:48:49.789490 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:49 crc kubenswrapper[5058]: E1014 06:48:49.789652 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.797925 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.797984 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.798009 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.798036 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.798061 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.900887 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.901013 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.901041 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.901073 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:49 crc kubenswrapper[5058]: I1014 06:48:49.901094 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:49Z","lastTransitionTime":"2025-10-14T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.004105 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.004195 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.004215 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.004249 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.004273 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.106841 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.106916 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.106932 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.106954 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.106971 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.156750 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.156839 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.156857 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.156882 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.156902 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.175083 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:50Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.179833 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.179875 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.179883 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.179899 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.179909 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.201107 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:50Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.206698 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.206782 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.206842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.206872 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.206894 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.228620 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:50Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.235484 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.235559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.235580 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.235616 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.235637 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.258220 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:50Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.263524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.263568 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.263587 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.263609 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.263625 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.284400 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:50Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.284566 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.286647 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.286725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.286744 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.286764 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.286779 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.389267 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.389313 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.389343 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.389357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.389367 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.493704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.493747 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.493763 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.493784 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.493827 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.597524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.597589 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.597606 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.597632 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.597651 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.702131 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.702200 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.702219 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.702244 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.702264 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.789227 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.789416 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.789682 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:50 crc kubenswrapper[5058]: E1014 06:48:50.789873 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.805639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.805690 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.805755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.805779 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.805817 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.908557 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.908618 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.908636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.908661 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:50 crc kubenswrapper[5058]: I1014 06:48:50.908678 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:50Z","lastTransitionTime":"2025-10-14T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.012031 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.012373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.012524 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.012676 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.012846 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.117516 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.117581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.117598 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.117623 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.117641 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.220124 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.220189 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.220210 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.220235 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.220254 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.323299 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.323622 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.323749 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.323919 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.324054 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.427372 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.427450 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.427475 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.427508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.427535 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.530753 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.530869 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.530895 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.530926 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.530948 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.633473 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.633549 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.633573 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.633603 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.633625 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.736438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.736508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.736528 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.736558 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.736577 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.789338 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.789409 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:51 crc kubenswrapper[5058]: E1014 06:48:51.790016 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:51 crc kubenswrapper[5058]: E1014 06:48:51.790111 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.839711 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.839835 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.839866 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.839895 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.839915 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.943000 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.943056 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.943075 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.943098 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:51 crc kubenswrapper[5058]: I1014 06:48:51.943116 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:51Z","lastTransitionTime":"2025-10-14T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.046085 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.046151 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.046168 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.046193 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.046210 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.149291 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.149362 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.149387 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.149411 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.149433 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.251538 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.251618 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.251639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.251669 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.251691 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.354907 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.354961 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.354977 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.354998 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.355015 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.458355 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.458394 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.458403 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.458416 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.458425 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.562121 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.562197 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.562220 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.562249 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.562269 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.665302 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.665365 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.665382 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.665408 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.665424 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.768547 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.768574 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.768581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.768593 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.768601 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.789497 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.789527 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:52 crc kubenswrapper[5058]: E1014 06:48:52.789711 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:52 crc kubenswrapper[5058]: E1014 06:48:52.789884 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.819213 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1
c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.837693 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.855318 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871026 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871137 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871167 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871192 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.871594 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.885139 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.900889 5058 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.920237 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.938738 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 
2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.956344 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.973351 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.974677 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.974725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.974738 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.974755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.974767 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:52Z","lastTransitionTime":"2025-10-14T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:52 crc kubenswrapper[5058]: I1014 06:48:52.987180 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:52Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.005373 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.026563 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.051335 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.068087 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.077128 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.077186 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.077203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.077226 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.077245 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.097948 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.114632 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.133017 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:53Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.180037 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.180113 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.180131 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.180159 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.180177 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.283025 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.283083 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.283100 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.283124 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.283144 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.386507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.386913 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.387087 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.387243 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.387392 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.491338 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.491416 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.491434 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.491461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.491480 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.595006 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.595188 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.595223 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.595255 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.595281 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.697711 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.697784 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.697814 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.697834 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.697847 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.789619 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:53 crc kubenswrapper[5058]: E1014 06:48:53.789786 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.789961 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:53 crc kubenswrapper[5058]: E1014 06:48:53.790100 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.800547 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.800644 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.800665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.800700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.800720 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.903586 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.903703 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.903725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.903783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:53 crc kubenswrapper[5058]: I1014 06:48:53.903837 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:53Z","lastTransitionTime":"2025-10-14T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 14 06:48:54 crc kubenswrapper[5058]: I1014 06:48:54.788968 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:54 crc kubenswrapper[5058]: I1014 06:48:54.789025 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:54 crc kubenswrapper[5058]: E1014 06:48:54.789173 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:48:54 crc kubenswrapper[5058]: E1014 06:48:54.789221 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 06:48:55 crc kubenswrapper[5058]: I1014 06:48:55.788878 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:48:55 crc kubenswrapper[5058]: I1014 06:48:55.788955 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:48:55 crc kubenswrapper[5058]: E1014 06:48:55.789059 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 06:48:55 crc kubenswrapper[5058]: E1014 06:48:55.789141 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.705460 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.705652 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705673 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.705641086 +0000 UTC m=+148.616724932 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.705748 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705791 5058 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705850 5058 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705890 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.705870253 +0000 UTC m=+148.616954099 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.705827 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705915 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.705903804 +0000 UTC m=+148.616987640 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.705947 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705971 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.705996 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706015 5058 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706082 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.706057968 +0000 UTC m=+148.617141814 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706095 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706116 5058 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706133 5058 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.706191 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.706175781 +0000 UTC m=+148.617259617 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.789451 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:48:56 crc kubenswrapper[5058]: I1014 06:48:56.789470 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.789725 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 06:48:56 crc kubenswrapper[5058]: E1014 06:48:56.789869 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.790015 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.790045 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:48:57 crc kubenswrapper[5058]: E1014 06:48:57.790256 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 06:48:57 crc kubenswrapper[5058]: E1014 06:48:57.790556 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934"
Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.791757 5058 scope.go:117] "RemoveContainer" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca"
Has your network provider started?"} Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.938479 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.938540 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.938556 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.938580 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:57 crc kubenswrapper[5058]: I1014 06:48:57.938601 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:57Z","lastTransitionTime":"2025-10-14T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.043590 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.043633 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.043664 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.043682 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.043699 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.146130 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.146177 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.146186 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.146201 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.146210 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.249423 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.249455 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.249466 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.249481 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.249493 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.352222 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.352262 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.352273 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.352288 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.352299 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.393042 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/2.log" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.397368 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.397792 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.416318 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.430747 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.443743 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.455658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.455726 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.455751 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.455781 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.455838 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.460624 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.474903 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.491897 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.507076 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.519899 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.536936 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 
2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.550054 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.558526 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.558579 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.558598 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.558622 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.558638 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.563953 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.576613 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.594315 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.608872 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.638414 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a897b94edf965089faf6d1f8a4462f6a3d67f8
ef0ed0fcdf9e2d6ee1663697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.654561 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.661190 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.661230 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.661240 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.661256 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.661267 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.689163 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.710629 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:58Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.764515 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.764582 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.764600 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.764625 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.764642 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.789673 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.789864 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:48:58 crc kubenswrapper[5058]: E1014 06:48:58.789981 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:48:58 crc kubenswrapper[5058]: E1014 06:48:58.790143 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.807985 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.867885 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.867917 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.867928 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.867943 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.867953 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.971164 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.971216 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.971233 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.971257 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:58 crc kubenswrapper[5058]: I1014 06:48:58.971274 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:58Z","lastTransitionTime":"2025-10-14T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.074144 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.074204 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.074221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.074246 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.074265 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.177256 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.177312 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.177328 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.177349 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.177367 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.280263 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.280310 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.280327 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.280352 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.280369 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.384038 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.384106 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.384125 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.384150 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.384168 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
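
The repeating KubeletNotReady condition is mechanical: the kubelet keeps polling for a CNI network config and finds none, because ovnkube-controller, the component that would write it, is crash-looping on the expired webhook certificate. A minimal Go sketch that checks for the config appearing, assuming the OpenShift path quoted in the message (plain kubeadm clusters typically use /etc/cni/net.d instead):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in "no CNI configuration file in
        // /etc/kubernetes/cni/net.d/" above.
        dir := "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            // The state this node is stuck in while ovnkube-controller
            // crash-loops: the network plugin has not written its config.
            fmt.Println("no CNI configuration file in", dir)
        }
    }
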
Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.402936 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/3.log" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.403623 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/2.log" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.406716 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" exitCode=1 Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.406846 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.406940 5058 scope.go:117] "RemoveContainer" containerID="b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.408022 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:48:59 crc kubenswrapper[5058]: E1014 06:48:59.408288 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.438589 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.453471 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.473188 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 
2025-08-24T17:21:41Z"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.487639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.487683 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.487699 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.487720 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.487737 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.491388 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z"
Oct 14 06:48:59 crc kubenswrapper[5058]: I1014
06:48:59.510541 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.524765 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.545490 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.564979 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.584341 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.590236 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.590317 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.590341 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.590373 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.590398 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.601358 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.622116 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a897b94edf965089faf6d1f8a4462f6a3d67f8
ef0ed0fcdf9e2d6ee1663697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6bc1268f8f5252c7f54bbd780c896a5f729d20ecbdbafe4c9812de9a4861dca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"message\\\":\\\"n.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI1014 06:48:27.799250 6756 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 4.978422ms\\\\nI1014 06:48:27.799320 6756 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1014 06:48:27.799344 6756 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1014 06:48:27.799357 6756 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1014 06:48:27.799395 6756 factory.go:1336] Added *v1.Node event handler 7\\\\nI1014 06:48:27.799415 6756 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1014 06:48:27.799627 6756 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1014 06:48:27.799684 6756 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1014 06:48:27.799709 6756 ovnkube.go:599] Stopped ovnkube\\\\nI1014 06:48:27.799739 6756 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 06:48:27.799938 6756 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:58Z\\\",\\\"message\\\":\\\"812217 7132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 06:48:58.812290 7132 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812479 7132 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812782 7132 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812843 7132 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812924 7132 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1014 06:48:58.813124 7132 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.636192 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 
06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.647927 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.659855 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f76cb49-bc77-4e8c-9793-45abd25399ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec3190c76ec9e141af1d35f78233b91af731bd38bcb4ed6da5413e0d4171b404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.688517 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a
580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.692684 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.692842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.692945 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.693059 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.693177 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.704645 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.718261 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.738957 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.757881 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:48:59Z is after 2025-08-24T17:21:41Z" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.789482 5058 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.789595 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:48:59 crc kubenswrapper[5058]: E1014 06:48:59.789698 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:48:59 crc kubenswrapper[5058]: E1014 06:48:59.789851 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.796617 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.796724 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.796755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.796782 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.796841 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.899643 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.900080 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.900097 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.900126 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:48:59 crc kubenswrapper[5058]: I1014 06:48:59.900144 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:48:59Z","lastTransitionTime":"2025-10-14T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.003309 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.003417 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.003464 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.003494 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.003515 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.106454 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.106503 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.106520 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.106544 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.106565 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.210039 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.210098 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.210118 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.210142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.210160 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.313712 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.313761 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.313777 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.313833 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.313852 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416101 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416177 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416194 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416220 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416242 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.416951 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/3.log" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.421971 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.422302 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.442327 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.466392 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.486183 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64184db4-5b6d-4aa8-b780-c9f6163af3d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c907512eb6913018f2854d119dc4b0c39fa2a1563de60a3e0126edb86960287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9x48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q5fhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.511704 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae5798c9-200b-4801-8cf2-750b1394ff5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://146b4c7e6c0367cabd7cfe1d12ec8e0287da0521855d1b98baba198599d17b57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4425dd1641607013e477806cff4787e8d0df9f6be849e99c278009d6332ecf35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148391d205e20cfa1f7d165ec58685d4476792b99a3475b48e4ae2e718c6c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fec7bd5739fc9c1aaf71c9da25c7368d0ffdd4bb0989c3477bab48cf2de72ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede1b41469aa1a76bbf3f6fe35ef2c8fd063bf08f72c2e761cf39dea19e18631\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d7d4ee16711e39fc1e00ef7bac9460d3d17b07944b20cd19ef39a22b6f1d611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a1887f5fb8ade5e808699c14c145dcdf67bab6e402bc0ba57b05adc8a375c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:48:00Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjkrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhxzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.518783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.518878 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.518904 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.518933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.518953 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.529157 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j7fmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4339a8-d57c-4951-87f3-5d00a0b20c84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ae08951c1dc9def0eee5eba75419261c567d64fe5493d80c75166750c46739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdbzw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j7fmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.547100 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d6e30cb-382e-4c57-bc62-8f3fab160965\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79aebfefd1a5781ac8b898f17f9575349ed9fe6b0893e5c500e3fe966a2c6a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a1f9abbe0ee31d54286fe33a15c4066444a9792debe7e2267eb1c419e8fc3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56b53dfdfaecca42de4bc0a715ea8abef0cd20b426d83133c2324e2bd1a0200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb10bc96526ffd641e4e99a5ec7259256c461565f0ee527dbfecdd0dd9286d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.567889 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ea871e377f0b5c6c9e076626ce9a748ba60b59000b82eaf973b0706b9e557a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 
2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.584960 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b9e9b240a442b8b5c5ebb323b2cff02e621ab80d900863f6a9fcbb368a2029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.603316 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.617270 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-54cn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674976ad-c787-440f-a8ab-98ebb4fd6d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65689e6249bb407295dfad22c1cfdf657edfd774003a8f29197e85cd7d88a53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvq6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54cn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.621725 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.621870 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.621890 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.621914 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.621931 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.640449 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-csl4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1288bab5-7372-4acc-963c-6232b27a7975\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:40Z\\\",\\\"message\\\":\\\"2025-10-14T06:47:55+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4\\\\n2025-10-14T06:47:55+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_91b5b1ac-b269-4ddf-831c-d78cddf678a4 to /host/opt/cni/bin/\\\\n2025-10-14T06:47:55Z [verbose] multus-daemon started\\\\n2025-10-14T06:47:55Z [verbose] Readiness Indicator file check\\\\n2025-10-14T06:48:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xpzq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-csl4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.661153 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd94b3a0-9632-4fcf-8bc7-2abb127bf11d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://291268e336ef46adb5ad4d229b09ed6ea330858d4449de535769e3bda630e0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d853ae19e9b49b23af1e2c1e3152177a076f7447435827597fc09987c88c327\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://735d32fe1127c818a21acf45f528684cee9009450c99051be64e7d526f0094b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab0b683e25e837035864fa85d03aeb93a0072d5eecdcb4fcda2a33321a5a3aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56f341e5c566e7608f2b5ae86ae528a24467adeb3c57b0cbfc72c29f331e2d2d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1014 06:47:46.377089 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 06:47:46.378787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023025780/tls.crt::/tmp/serving-cert-4023025780/tls.key\\\\\\\"\\\\nI1014 06:47:52.205616 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 06:47:52.209455 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 06:47:52.209487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 06:47:52.209522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 06:47:52.209532 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 06:47:52.222262 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1014 06:47:52.222302 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1014 06:47:52.222312 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 06:47:52.222354 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 06:47:52.222361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 06:47:52.222365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 06:47:52.222370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1014 06:47:52.225179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1f4b4d64e15b3480b726ae58502d0996fb8d0284e56ba5f6acdb2fe436d8b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3376f8ae20fe10fb467fdabf4872716231aac502cfeeafd2bc483407344b02a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.662107 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.662157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.662174 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.662196 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.662216 5058 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.680345 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06f9b0f-5b84-4aaa-8445-d30435039d07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8820e3702e3448cdd20234bd1cad7c023e50b26bc5a70412233965f682aabaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80ed79eb947b910cd2d9ff88b75dd70715f17a3b90eb6f86e31e5dde3f1254d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fe59c9963d1a4d80249143ffb335fabaae8ed420e4d4d98c54fe5b75fdd13d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76a405f530e8a233f7d91c9a9cde7d029d47eb5894c5399d631091c12e16461\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.683426 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.688665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.688719 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.688737 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.688759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.688778 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.709462 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 
2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.713349 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58308f56-cccd-4c52-89af-c23806a4769e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
72a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T06:48:58Z\\\",\\\"message\\\":\\\"812217 7132 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 06:48:58.812290 7132 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812479 7132 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812782 7132 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812843 7132 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 06:48:58.812924 7132 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1014 06:48:58.813124 7132 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T06:48:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hlkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fw5vr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z"
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.715277 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.715332 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.715351 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.715376 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.715394 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.731776 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb601f8e-5c64-47af-ac59-4251c7ab625a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186b8f878b1b978be1ac687b5904637111d1c657107674d3a7b23ca09082ddfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2689de94f7ebb46210134e97e9890b06265f8117b451ce960aa4239f00f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:48:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmxkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9jpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z"
Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.736290 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 
2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.741584 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.741640 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.741665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.741698 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.741723 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.748552 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a70e631f-95b4-451e-821b-8b9297428934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:48:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hm26h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:48:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ckdsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.760372 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.762422 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f76cb49-bc77-4e8c-9793-45abd25399ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec3190c76ec9e141af1d35f78233b91af731bd38bcb4ed6da5413e0d4171b404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f5fb7c244f242dd1d99b4fd2c24a96c10367ea8c432d07bde7b1569fae37c36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.765164 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.765207 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.765219 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.765235 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.765247 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.790786 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.790943 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.791098 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.791035 5058 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T06:49:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"df2d52db-1c59-470b-85f0-4c17f56af73f\\\",\\\"systemUUID\\\":\\\"0bd4897c-1e38-4562-b7ae-0d06c96681c4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 
2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.791273 5058 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 06:49:00 crc kubenswrapper[5058]: E1014 06:49:00.791530 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.793383 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.793423 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.793439 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.793483 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.793502 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.802258 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1fb4700-a30c-43f8-88ae-50dabcdbcdbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0996b2704792d75967f27e115e4f278946944c31b1d16d7437b4e17d46a1b7b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58913fe1249d3dc23058379e67034d0bda2ba73337619f09970e65d0ed0f79a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf8eb86d70352176e0cb4a1c257e7adb402b9fe0c2024a4efe5141a9218f5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531af8991f54034e97dc583c35df4cebb2e751a580608768f023b3b45cf6a2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29015c1a79d009cb84d6938ee4cd188266eab4321df9caa032712f4cfdcf6351\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbb5f541dc35d8aded743a8bc156fbbf5d332e7f5afaf41846ba09a95d4eaff0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9715982352f5f138b5d273b42e1f6c8cdb4580b1e8eba3c6a88a87ea4303042e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-14T06:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e29d41335889b78babc5ada24ccc332a7483defd6c050de3e99a098bb2c0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T06:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T06:47:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.820204 5058 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T06:47:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ee76aa0ed0719dc796795adecf4d3125e9b142a356a2b27380fdb84f354de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f9cfe0df966639dd226fe2f56d5e52e71d6ceaaa9c2f8a63e5c39e28437e07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T06:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T06:49:00Z is after 2025-08-24T17:21:41Z" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.896843 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.896918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.896944 5058 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.896974 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:00 crc kubenswrapper[5058]: I1014 06:49:00.896991 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:00Z","lastTransitionTime":"2025-10-14T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.000142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.000196 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.000214 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.000269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.000289 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.104278 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.104444 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.104471 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.104501 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.104522 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.207527 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.207615 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.207633 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.207658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.207674 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.310723 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.310788 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.310852 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.310884 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.310906 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.413971 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.414039 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.414056 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.414087 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.414106 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.517700 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.517761 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.517778 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.517829 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.517850 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.621203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.621273 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.621292 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.621322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.621341 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.724665 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.724740 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.724763 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.724828 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.724848 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.789876 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.789925 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:01 crc kubenswrapper[5058]: E1014 06:49:01.790065 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:01 crc kubenswrapper[5058]: E1014 06:49:01.790546 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.828438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.828537 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.828556 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.828612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.828631 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.931165 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.931379 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.931396 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.931419 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:01 crc kubenswrapper[5058]: I1014 06:49:01.931436 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:01Z","lastTransitionTime":"2025-10-14T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.035124 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.035185 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.035203 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.035228 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.035252 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.138036 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.138100 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.138119 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.138142 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.138160 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.241327 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.241382 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.241398 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.241421 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.241438 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.344461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.344526 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.344544 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.344569 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.344586 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.447272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.447336 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.447355 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.447380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.447400 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.550716 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.550772 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.550791 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.550847 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.550864 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.654194 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.654328 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.654352 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.654380 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.654498 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.757174 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.757229 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.757246 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.757269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.757286 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.789123 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.789146 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:02 crc kubenswrapper[5058]: E1014 06:49:02.789283 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:02 crc kubenswrapper[5058]: E1014 06:49:02.789448 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.827544 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-54cn9" podStartSLOduration=69.827512631 podStartE2EDuration="1m9.827512631s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.827434378 +0000 UTC m=+90.738518204" watchObservedRunningTime="2025-10-14 06:49:02.827512631 +0000 UTC m=+90.738596487" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.846953 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-csl4q" podStartSLOduration=69.84692568 podStartE2EDuration="1m9.84692568s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.846295962 +0000 UTC m=+90.757379838" watchObservedRunningTime="2025-10-14 06:49:02.84692568 +0000 UTC m=+90.758009526" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.867469 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.867508 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.867516 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.867531 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.867540 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:02Z","lastTransitionTime":"2025-10-14T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.878394 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hhxzz" podStartSLOduration=69.878378409 podStartE2EDuration="1m9.878378409s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.86884954 +0000 UTC m=+90.779933356" watchObservedRunningTime="2025-10-14 06:49:02.878378409 +0000 UTC m=+90.789462215"
Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.878501 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-j7fmm" podStartSLOduration=68.878497933 podStartE2EDuration="1m8.878497933s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.877946677 +0000 UTC m=+90.789030483" watchObservedRunningTime="2025-10-14 06:49:02.878497933 +0000 UTC m=+90.789581729"
Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.904079 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.904051495 podStartE2EDuration="35.904051495s" podCreationTimestamp="2025-10-14 06:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.890707598 +0000 UTC m=+90.801791414" watchObservedRunningTime="2025-10-14 06:49:02.904051495 +0000 UTC m=+90.815135341"
Has your network provider started?"} Oct 14 06:49:02 crc kubenswrapper[5058]: I1014 06:49:02.976646 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9jpds" podStartSLOduration=68.976627308 podStartE2EDuration="1m8.976627308s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.944903671 +0000 UTC m=+90.855987477" watchObservedRunningTime="2025-10-14 06:49:02.976627308 +0000 UTC m=+90.887711114" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.012112 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.012090981 podStartE2EDuration="1m11.012090981s" podCreationTimestamp="2025-10-14 06:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:02.995341857 +0000 UTC m=+90.906425673" watchObservedRunningTime="2025-10-14 06:49:03.012090981 +0000 UTC m=+90.923174797" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.012268 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.012263216 podStartE2EDuration="1m5.012263216s" podCreationTimestamp="2025-10-14 06:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:03.011675819 +0000 UTC m=+90.922759625" watchObservedRunningTime="2025-10-14 06:49:03.012263216 +0000 UTC m=+90.923347032" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.045651 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.045633299 podStartE2EDuration="5.045633299s" podCreationTimestamp="2025-10-14 06:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:03.045604839 +0000 UTC m=+90.956688645" watchObservedRunningTime="2025-10-14 06:49:03.045633299 +0000 UTC m=+90.956717105" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.072566 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.072600 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.072610 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.072622 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.072631 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.074773 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.074757113 podStartE2EDuration="1m11.074757113s" podCreationTimestamp="2025-10-14 06:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:03.074048213 +0000 UTC m=+90.985132029" watchObservedRunningTime="2025-10-14 06:49:03.074757113 +0000 UTC m=+90.985840919" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.126117 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podStartSLOduration=70.126091045 podStartE2EDuration="1m10.126091045s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:03.125447287 +0000 UTC m=+91.036531103" watchObservedRunningTime="2025-10-14 06:49:03.126091045 +0000 UTC m=+91.037174861" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.175322 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.175366 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.175375 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.175390 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.175398 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.279141 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.279205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.279221 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.279245 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.279263 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.381461 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.381517 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.381535 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.381557 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.381576 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.485922 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.485981 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.486001 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.486025 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.486043 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.588723 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.588783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.588842 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.588874 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.588898 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.691647 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.691708 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.691729 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.691757 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.691775 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.790091 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.790121 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:03 crc kubenswrapper[5058]: E1014 06:49:03.790253 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:03 crc kubenswrapper[5058]: E1014 06:49:03.790304 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.794193 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.794262 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.794278 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.794301 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.794317 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.897636 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.897709 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.897727 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.897755 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:03 crc kubenswrapper[5058]: I1014 06:49:03.897772 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:03Z","lastTransitionTime":"2025-10-14T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.001783 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.001897 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.001918 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.001944 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.001963 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.105063 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.105117 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.105133 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.105157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.105173 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.208619 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.208681 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.208699 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.208721 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.208737 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.326390 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.326436 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.326451 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.326470 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.326482 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.428845 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.428898 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.428915 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.428938 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.428954 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.532554 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.532615 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.532632 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.532659 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.532677 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.636205 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.636269 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.636287 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.636314 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.636354 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.739619 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.739707 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.739731 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.739760 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.739777 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.789649 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:04 crc kubenswrapper[5058]: E1014 06:49:04.789880 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.789954 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:04 crc kubenswrapper[5058]: E1014 06:49:04.790137 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.843304 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.843363 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.843381 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.843431 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.843450 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.946507 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.946553 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.946564 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.946581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:04 crc kubenswrapper[5058]: I1014 06:49:04.946596 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:04Z","lastTransitionTime":"2025-10-14T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.049601 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.049658 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.049673 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.049692 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.049707 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.152935 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.152994 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.153013 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.153039 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.153055 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.257186 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.257252 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.257268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.257289 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.257304 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.359620 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.359679 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.359696 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.359727 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.359748 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.463180 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.463233 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.463252 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.463277 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.463295 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.566355 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.566414 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.566432 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.566457 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.566473 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.669369 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.669434 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.669452 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.669476 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.669493 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.772928 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.772990 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.773007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.773029 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.773045 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.789074 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.789110 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:05 crc kubenswrapper[5058]: E1014 06:49:05.789509 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:05 crc kubenswrapper[5058]: E1014 06:49:05.789894 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.876181 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.876245 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.876293 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.876325 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.876347 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.979976 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.980038 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.980055 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.980080 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:05 crc kubenswrapper[5058]: I1014 06:49:05.980099 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:05Z","lastTransitionTime":"2025-10-14T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.083933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.084020 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.084049 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.084084 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.084111 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.186775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.186861 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.186883 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.186912 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.186933 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.289866 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.289975 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.290000 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.290027 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.290046 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.393917 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.393987 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.394003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.394031 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.394051 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.497412 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.497653 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.497682 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.497713 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.497734 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.600966 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.601029 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.601037 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.601069 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.601083 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.703775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.703844 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.703855 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.703872 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.703884 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.789454 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:06 crc kubenswrapper[5058]: E1014 06:49:06.789591 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.789454 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:06 crc kubenswrapper[5058]: E1014 06:49:06.789672 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.806951 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.807016 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.807034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.807057 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.807075 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.909895 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.909956 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.909974 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.909997 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:06 crc kubenswrapper[5058]: I1014 06:49:06.910014 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:06Z","lastTransitionTime":"2025-10-14T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.012661 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.012704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.012719 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.012738 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.012751 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.116970 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.117034 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.117057 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.117083 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.117103 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.220408 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.220475 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.220492 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.220516 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.220534 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.323563 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.323639 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.323663 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.323695 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.323717 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.427079 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.427139 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.427156 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.427179 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.427196 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.530466 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.530540 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.530559 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.530586 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.530606 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.634464 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.634536 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.634554 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.634581 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.634603 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.738545 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.738608 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.738624 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.738647 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.738663 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.789194 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:07 crc kubenswrapper[5058]: E1014 06:49:07.789337 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.789702 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:07 crc kubenswrapper[5058]: E1014 06:49:07.790021 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.842671 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.842738 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.842762 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.842790 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.842840 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.946043 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.946117 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.946143 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.946175 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:07 crc kubenswrapper[5058]: I1014 06:49:07.946201 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:07Z","lastTransitionTime":"2025-10-14T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.049911 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.049981 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.050002 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.050035 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.050054 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.153272 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.153355 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.153378 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.153415 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.153441 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.256374 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.256427 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.256442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.256466 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.256489 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.360266 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.360334 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.360353 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.360382 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.360405 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.463885 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.463949 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.463964 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.463989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.464005 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.567056 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.567120 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.567135 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.567159 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.567176 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.670889 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.670956 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.670990 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.671016 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.671035 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.774466 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.774553 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.774577 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.774612 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.774635 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.790029 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:08 crc kubenswrapper[5058]: E1014 06:49:08.790246 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.790368 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:08 crc kubenswrapper[5058]: E1014 06:49:08.790627 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.877990 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.878065 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.878085 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.878121 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.878143 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.981729 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.982019 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.982068 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.982095 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:08 crc kubenswrapper[5058]: I1014 06:49:08.982468 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:08Z","lastTransitionTime":"2025-10-14T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.084922 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.084943 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.084952 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.084963 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.084972 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.188194 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.188250 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.188268 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.188290 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.188307 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.291014 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.291071 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.291087 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.291111 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.291127 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.395082 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.395230 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.395253 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.395285 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.395302 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.498606 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.498680 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.498706 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.498740 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.498764 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.601762 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.601824 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.601834 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.601852 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.601867 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.704872 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.704933 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.704958 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.704989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.705010 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.788951 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.789020 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:09 crc kubenswrapper[5058]: E1014 06:49:09.789172 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:09 crc kubenswrapper[5058]: E1014 06:49:09.789316 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.808888 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.808948 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.808958 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.808981 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.808996 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.912015 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.912088 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.912113 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.912141 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:09 crc kubenswrapper[5058]: I1014 06:49:09.912164 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:09Z","lastTransitionTime":"2025-10-14T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.015298 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.015342 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.015357 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.015377 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.015391 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.117952 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.117989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.118009 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.118024 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.118036 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.221387 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.221427 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.221438 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.221454 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.221464 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.324891 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.324950 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.324975 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.325003 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.325023 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.428300 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.428368 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.428398 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.428430 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.428450 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.531689 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.531744 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.531760 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.531784 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.531827 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.635343 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.635401 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.635418 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.635442 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.635459 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.737939 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.737978 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.737989 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.738007 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.738020 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.789503 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.789517 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:10 crc kubenswrapper[5058]: E1014 06:49:10.789674 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:10 crc kubenswrapper[5058]: E1014 06:49:10.790058 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.841107 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.841157 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.841175 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.841198 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.841216 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.943704 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.943759 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.943775 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.943833 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:10 crc kubenswrapper[5058]: I1014 06:49:10.943852 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:10Z","lastTransitionTime":"2025-10-14T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.035483 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.035537 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.035555 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.035579 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.035596 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:11Z","lastTransitionTime":"2025-10-14T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.067351 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.067436 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.067459 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.067493 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.067516 5058 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T06:49:11Z","lastTransitionTime":"2025-10-14T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.112497 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm"] Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.113092 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.115888 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.116406 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.116427 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.116751 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.154245 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d988968a-cbcb-4af2-91b2-114e459ddd4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.154312 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d988968a-cbcb-4af2-91b2-114e459ddd4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.154352 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.154383 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.154420 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d988968a-cbcb-4af2-91b2-114e459ddd4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255471 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d988968a-cbcb-4af2-91b2-114e459ddd4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc 
kubenswrapper[5058]: I1014 06:49:11.255536 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255570 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255608 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d988968a-cbcb-4af2-91b2-114e459ddd4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255710 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d988968a-cbcb-4af2-91b2-114e459ddd4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.255953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d988968a-cbcb-4af2-91b2-114e459ddd4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.257285 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d988968a-cbcb-4af2-91b2-114e459ddd4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.264663 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d988968a-cbcb-4af2-91b2-114e459ddd4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.288079 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d988968a-cbcb-4af2-91b2-114e459ddd4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-frdxm\" (UID: \"d988968a-cbcb-4af2-91b2-114e459ddd4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.443052 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.457655 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:11 crc kubenswrapper[5058]: E1014 06:49:11.457884 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:49:11 crc kubenswrapper[5058]: E1014 06:49:11.457967 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs podName:a70e631f-95b4-451e-821b-8b9297428934 nodeName:}" failed. No retries permitted until 2025-10-14 06:50:15.45794455 +0000 UTC m=+163.369028386 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs") pod "network-metrics-daemon-ckdsj" (UID: "a70e631f-95b4-451e-821b-8b9297428934") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.788901 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:11 crc kubenswrapper[5058]: E1014 06:49:11.789246 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:11 crc kubenswrapper[5058]: I1014 06:49:11.789294 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:11 crc kubenswrapper[5058]: E1014 06:49:11.789607 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:12 crc kubenswrapper[5058]: I1014 06:49:12.465541 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" event={"ID":"d988968a-cbcb-4af2-91b2-114e459ddd4d","Type":"ContainerStarted","Data":"700732b288518dccfb65846846c095b9becb1f458732778e8c8024712b7ec263"} Oct 14 06:49:12 crc kubenswrapper[5058]: I1014 06:49:12.465616 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" event={"ID":"d988968a-cbcb-4af2-91b2-114e459ddd4d","Type":"ContainerStarted","Data":"246c7296ccd6fe2d05451b96b1635b5776a2ddba88be30436ae65caf2e796beb"} Oct 14 06:49:12 crc kubenswrapper[5058]: I1014 06:49:12.487171 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-frdxm" podStartSLOduration=79.487147586 podStartE2EDuration="1m19.487147586s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:12.48656997 +0000 UTC m=+100.397653836" watchObservedRunningTime="2025-10-14 06:49:12.487147586 +0000 UTC m=+100.398231402" Oct 14 06:49:12 crc kubenswrapper[5058]: I1014 06:49:12.789847 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:12 crc kubenswrapper[5058]: I1014 06:49:12.789990 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:12 crc kubenswrapper[5058]: E1014 06:49:12.792231 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:12 crc kubenswrapper[5058]: E1014 06:49:12.792405 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:13 crc kubenswrapper[5058]: I1014 06:49:13.788942 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:13 crc kubenswrapper[5058]: I1014 06:49:13.789124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:13 crc kubenswrapper[5058]: E1014 06:49:13.789199 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
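
Every kubenswrapper entry here carries klog's default header before the structured message: a severity letter fused to an MMDD date (so "E1014" is an error logged on Oct 14), a wall-clock time with microseconds, the PID, and the source file:line that emitted it. A minimal Go sketch that splits those fields, assuming the default klog layout; the regexp and field names are illustrative, not taken from kubelet:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches klog's default header: severity letter plus MMDD date,
// time, PID, and source file:line, followed by "] " and the message body.
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) (\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	// One of the entries above, with the journald prefix already stripped.
	line := `E1014 06:49:11.457884 5058 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
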
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:13 crc kubenswrapper[5058]: E1014 06:49:13.789885 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:13 crc kubenswrapper[5058]: I1014 06:49:13.790339 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:49:13 crc kubenswrapper[5058]: E1014 06:49:13.790610 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:49:14 crc kubenswrapper[5058]: I1014 06:49:14.789034 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:14 crc kubenswrapper[5058]: I1014 06:49:14.789189 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:14 crc kubenswrapper[5058]: E1014 06:49:14.789537 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:14 crc kubenswrapper[5058]: E1014 06:49:14.790161 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:15 crc kubenswrapper[5058]: I1014 06:49:15.789415 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:15 crc kubenswrapper[5058]: I1014 06:49:15.789434 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:15 crc kubenswrapper[5058]: E1014 06:49:15.789666 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
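
The "back-off 40s" for ovnkube-controller above (and the "back-off 10s" for kube-multus later in this window) is kubelet's CrashLoopBackOff delay. By default it starts at 10s and doubles on each consecutive crash up to a 5m cap, resetting once a container has run cleanly for ten minutes; those defaults are assumed here rather than read from this node's config. A small sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after n consecutive crashes,
// using the commonly documented kubelet defaults (10s base, doubling,
// 5m cap) as assumptions.
func crashLoopDelay(crashes int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < crashes; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		// prints: 10s 20s 40s 1m20s 2m40s 5m0s
		fmt.Printf("after crash %d: wait %v\n", n, crashLoopDelay(n))
	}
}

On that schedule, the 40s back-off would correspond to a third consecutive failure of ovnkube-controller.
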
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:15 crc kubenswrapper[5058]: E1014 06:49:15.789765 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:16 crc kubenswrapper[5058]: I1014 06:49:16.789196 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:16 crc kubenswrapper[5058]: I1014 06:49:16.789348 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:16 crc kubenswrapper[5058]: E1014 06:49:16.790106 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:16 crc kubenswrapper[5058]: E1014 06:49:16.789619 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:17 crc kubenswrapper[5058]: I1014 06:49:17.789347 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:17 crc kubenswrapper[5058]: I1014 06:49:17.789408 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:17 crc kubenswrapper[5058]: E1014 06:49:17.790211 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:17 crc kubenswrapper[5058]: E1014 06:49:17.790369 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:18 crc kubenswrapper[5058]: I1014 06:49:18.789381 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:18 crc kubenswrapper[5058]: E1014 06:49:18.789578 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:18 crc kubenswrapper[5058]: I1014 06:49:18.789619 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:18 crc kubenswrapper[5058]: E1014 06:49:18.790081 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:19 crc kubenswrapper[5058]: I1014 06:49:19.789740 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:19 crc kubenswrapper[5058]: E1014 06:49:19.789947 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:19 crc kubenswrapper[5058]: I1014 06:49:19.790179 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:19 crc kubenswrapper[5058]: E1014 06:49:19.790520 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:20 crc kubenswrapper[5058]: I1014 06:49:20.789754 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:20 crc kubenswrapper[5058]: I1014 06:49:20.789765 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:20 crc kubenswrapper[5058]: E1014 06:49:20.789968 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:20 crc kubenswrapper[5058]: E1014 06:49:20.790133 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:21 crc kubenswrapper[5058]: I1014 06:49:21.789938 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:21 crc kubenswrapper[5058]: E1014 06:49:21.790135 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:21 crc kubenswrapper[5058]: I1014 06:49:21.789954 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:21 crc kubenswrapper[5058]: E1014 06:49:21.790627 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:22 crc kubenswrapper[5058]: I1014 06:49:22.789671 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:22 crc kubenswrapper[5058]: I1014 06:49:22.789716 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:22 crc kubenswrapper[5058]: E1014 06:49:22.791380 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:22 crc kubenswrapper[5058]: E1014 06:49:22.791582 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:23 crc kubenswrapper[5058]: I1014 06:49:23.789853 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:23 crc kubenswrapper[5058]: I1014 06:49:23.789903 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:23 crc kubenswrapper[5058]: E1014 06:49:23.790027 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:23 crc kubenswrapper[5058]: E1014 06:49:23.790193 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:24 crc kubenswrapper[5058]: I1014 06:49:24.789104 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:24 crc kubenswrapper[5058]: I1014 06:49:24.789179 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:24 crc kubenswrapper[5058]: E1014 06:49:24.789255 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:24 crc kubenswrapper[5058]: E1014 06:49:24.789333 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:25 crc kubenswrapper[5058]: I1014 06:49:25.789274 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:25 crc kubenswrapper[5058]: I1014 06:49:25.789295 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:25 crc kubenswrapper[5058]: E1014 06:49:25.789461 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:25 crc kubenswrapper[5058]: E1014 06:49:25.789540 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:26 crc kubenswrapper[5058]: I1014 06:49:26.789302 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:26 crc kubenswrapper[5058]: E1014 06:49:26.789519 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:26 crc kubenswrapper[5058]: I1014 06:49:26.789635 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:26 crc kubenswrapper[5058]: E1014 06:49:26.789747 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:26 crc kubenswrapper[5058]: I1014 06:49:26.790636 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:49:26 crc kubenswrapper[5058]: E1014 06:49:26.790826 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fw5vr_openshift-ovn-kubernetes(58308f56-cccd-4c52-89af-c23806a4769e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.525190 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/1.log" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.526359 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/0.log" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.526437 5058 generic.go:334] "Generic (PLEG): container finished" podID="1288bab5-7372-4acc-963c-6232b27a7975" containerID="a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948" exitCode=1 Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.526502 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerDied","Data":"a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948"} Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.526587 5058 scope.go:117] "RemoveContainer" containerID="4b150ac8f87de35be644b4483c170f5135f1a8cb6c658f03c7e4e801fbcfbe23" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.527207 5058 scope.go:117] "RemoveContainer" containerID="a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948" Oct 14 06:49:27 crc kubenswrapper[5058]: E1014 06:49:27.527539 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-csl4q_openshift-multus(1288bab5-7372-4acc-963c-6232b27a7975)\"" pod="openshift-multus/multus-csl4q" podUID="1288bab5-7372-4acc-963c-6232b27a7975" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.789667 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:27 crc kubenswrapper[5058]: I1014 06:49:27.789764 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:27 crc kubenswrapper[5058]: E1014 06:49:27.790966 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:27 crc kubenswrapper[5058]: E1014 06:49:27.791036 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:28 crc kubenswrapper[5058]: I1014 06:49:28.531024 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/1.log" Oct 14 06:49:28 crc kubenswrapper[5058]: I1014 06:49:28.789044 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:28 crc kubenswrapper[5058]: I1014 06:49:28.789121 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:28 crc kubenswrapper[5058]: E1014 06:49:28.789200 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:28 crc kubenswrapper[5058]: E1014 06:49:28.789255 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:29 crc kubenswrapper[5058]: I1014 06:49:29.789190 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:29 crc kubenswrapper[5058]: I1014 06:49:29.789573 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:29 crc kubenswrapper[5058]: E1014 06:49:29.789747 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:29 crc kubenswrapper[5058]: E1014 06:49:29.789982 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
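
The 0.log/1.log pair the kubelet parses above reflects how container logs are laid out under /var/log/pods/<ns>_<pod>_<uid>/<container>/: each instance's log file appears to be named after the container's restart count, so 1.log shows up once kube-multus has crashed and been restarted. A sketch that picks the newest instance's log by that numbering; the helper name and path handling are hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// latestContainerLog returns the <restartCount>.log file with the highest
// number in a per-container log directory.
func latestContainerLog(dir string) (string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	best, bestN := "", -1
	for _, e := range entries {
		n, err := strconv.Atoi(strings.TrimSuffix(e.Name(), ".log"))
		if err == nil && n > bestN {
			best, bestN = filepath.Join(dir, e.Name()), n
		}
	}
	if bestN < 0 {
		return "", fmt.Errorf("no numbered logs in %s", dir)
	}
	return best, nil
}

func main() {
	// Directory taken from the multus entries above.
	fmt.Println(latestContainerLog("/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus"))
}
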
pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:30 crc kubenswrapper[5058]: I1014 06:49:30.789391 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:30 crc kubenswrapper[5058]: I1014 06:49:30.789431 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:30 crc kubenswrapper[5058]: E1014 06:49:30.789558 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:30 crc kubenswrapper[5058]: E1014 06:49:30.789703 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:31 crc kubenswrapper[5058]: I1014 06:49:31.789001 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:31 crc kubenswrapper[5058]: I1014 06:49:31.789059 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:31 crc kubenswrapper[5058]: E1014 06:49:31.789185 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:31 crc kubenswrapper[5058]: E1014 06:49:31.789560 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:32 crc kubenswrapper[5058]: I1014 06:49:32.789971 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:32 crc kubenswrapper[5058]: I1014 06:49:32.790875 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:32 crc kubenswrapper[5058]: E1014 06:49:32.791848 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:32 crc kubenswrapper[5058]: E1014 06:49:32.791958 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:32 crc kubenswrapper[5058]: E1014 06:49:32.798390 5058 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 14 06:49:32 crc kubenswrapper[5058]: E1014 06:49:32.911725 5058 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 06:49:33 crc kubenswrapper[5058]: I1014 06:49:33.788864 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:33 crc kubenswrapper[5058]: I1014 06:49:33.788923 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:33 crc kubenswrapper[5058]: E1014 06:49:33.789003 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:33 crc kubenswrapper[5058]: E1014 06:49:33.789084 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:34 crc kubenswrapper[5058]: I1014 06:49:34.789255 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:34 crc kubenswrapper[5058]: I1014 06:49:34.789306 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:34 crc kubenswrapper[5058]: E1014 06:49:34.789587 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:34 crc kubenswrapper[5058]: E1014 06:49:34.789724 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:35 crc kubenswrapper[5058]: I1014 06:49:35.789969 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:35 crc kubenswrapper[5058]: I1014 06:49:35.790065 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:35 crc kubenswrapper[5058]: E1014 06:49:35.790144 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:35 crc kubenswrapper[5058]: E1014 06:49:35.790251 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:36 crc kubenswrapper[5058]: I1014 06:49:36.789518 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:36 crc kubenswrapper[5058]: I1014 06:49:36.789594 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:36 crc kubenswrapper[5058]: E1014 06:49:36.789743 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:36 crc kubenswrapper[5058]: E1014 06:49:36.789970 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:37 crc kubenswrapper[5058]: I1014 06:49:37.789772 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:37 crc kubenswrapper[5058]: I1014 06:49:37.789787 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:37 crc kubenswrapper[5058]: E1014 06:49:37.790051 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:37 crc kubenswrapper[5058]: E1014 06:49:37.790197 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:37 crc kubenswrapper[5058]: E1014 06:49:37.913333 5058 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 06:49:38 crc kubenswrapper[5058]: I1014 06:49:38.789686 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:38 crc kubenswrapper[5058]: I1014 06:49:38.789783 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:38 crc kubenswrapper[5058]: E1014 06:49:38.789904 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:38 crc kubenswrapper[5058]: E1014 06:49:38.790020 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:38 crc kubenswrapper[5058]: I1014 06:49:38.790521 5058 scope.go:117] "RemoveContainer" containerID="a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948" Oct 14 06:49:39 crc kubenswrapper[5058]: I1014 06:49:39.572242 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/1.log" Oct 14 06:49:39 crc kubenswrapper[5058]: I1014 06:49:39.572634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerStarted","Data":"eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081"} Oct 14 06:49:39 crc kubenswrapper[5058]: I1014 06:49:39.790652 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:39 crc kubenswrapper[5058]: I1014 06:49:39.790724 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:39 crc kubenswrapper[5058]: E1014 06:49:39.790865 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:39 crc kubenswrapper[5058]: I1014 06:49:39.791148 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:49:39 crc kubenswrapper[5058]: E1014 06:49:39.791166 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.579104 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/3.log" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.583554 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerStarted","Data":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"} Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.584158 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.629121 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podStartSLOduration=107.62909053999999 podStartE2EDuration="1m47.62909054s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:40.628448052 +0000 UTC m=+128.539531898" watchObservedRunningTime="2025-10-14 06:49:40.62909054 +0000 UTC m=+128.540174436" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.676773 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ckdsj"] Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.676946 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:40 crc kubenswrapper[5058]: E1014 06:49:40.677110 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.789434 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:40 crc kubenswrapper[5058]: I1014 06:49:40.789498 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:40 crc kubenswrapper[5058]: E1014 06:49:40.789565 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:40 crc kubenswrapper[5058]: E1014 06:49:40.789681 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
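
The podStartSLOduration reported just above is watchObservedRunningTime minus podCreationTimestamp, and because both pull timestamps are the zero time (no image pull happened), it matches podStartE2EDuration exactly. Recomputing it from the two timestamps in that entry, as a standalone sketch rather than kubelet code:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the ovnkube-node-fw5vr startup entry above.
	// Go accepts fractional seconds when parsing even though the layout
	// omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-10-14 06:47:53 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-14 06:49:40.62909054 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 1m47.62909054s
}
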
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:41 crc kubenswrapper[5058]: I1014 06:49:41.789915 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:41 crc kubenswrapper[5058]: E1014 06:49:41.790515 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 06:49:42 crc kubenswrapper[5058]: I1014 06:49:42.790180 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:42 crc kubenswrapper[5058]: E1014 06:49:42.791965 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 06:49:42 crc kubenswrapper[5058]: I1014 06:49:42.792072 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:42 crc kubenswrapper[5058]: E1014 06:49:42.792269 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ckdsj" podUID="a70e631f-95b4-451e-821b-8b9297428934" Oct 14 06:49:42 crc kubenswrapper[5058]: I1014 06:49:42.792082 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:42 crc kubenswrapper[5058]: E1014 06:49:42.792416 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 06:49:43 crc kubenswrapper[5058]: I1014 06:49:43.789629 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:49:43 crc kubenswrapper[5058]: I1014 06:49:43.793999 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 06:49:43 crc kubenswrapper[5058]: I1014 06:49:43.796117 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.789623 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.789707 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.789623 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.793450 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.793451 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.793871 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 06:49:44 crc kubenswrapper[5058]: I1014 06:49:44.794165 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.318844 5058 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.386419 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.387342 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.388747 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.389221 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.395656 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: W1014 06:49:52.396058 5058 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Oct 14 06:49:52 crc kubenswrapper[5058]: W1014 06:49:52.396134 5058 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Oct 14 06:49:52 crc kubenswrapper[5058]: E1014 06:49:52.396218 5058 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 06:49:52 crc kubenswrapper[5058]: E1014 06:49:52.396132 5058 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 06:49:52 crc kubenswrapper[5058]: W1014 06:49:52.396482 5058 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Oct 14 06:49:52 crc kubenswrapper[5058]: E1014 06:49:52.396533 5058 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.398779 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.400357 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: W1014 06:49:52.400501 5058 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Oct 14 06:49:52 crc kubenswrapper[5058]: E1014 06:49:52.400546 5058 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.401017 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.401744 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.403320 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.404085 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.405452 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fhbms"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.406138 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.406998 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.412667 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.413036 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.413286 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.413515 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.413979 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414039 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414272 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414297 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414506 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414578 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414711 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.414772 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415316 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415471 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415613 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415638 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415731 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.415750 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.416047 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.416273 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.416348 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.417058 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.420267 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.420538 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.420715 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.420856 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m4grl"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.421584 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qhldv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.421955 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.422212 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.428582 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.428692 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.428969 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.428993 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.429244 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.429869 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.430239 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.430654 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.430754 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.430910 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.431087 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.431423 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.432741 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.437830 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.439558 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.440235 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.445650 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.464406 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465083 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465323 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp67\" (UniqueName: \"kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465524 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwbg\" (UniqueName: \"kubernetes.io/projected/00fc1c84-bcbb-4986-8096-f3fe338d7955-kube-api-access-klwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465560 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465571 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8be2923f-db3e-40bf-b821-27e0d8ab84ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465660 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fc1c84-bcbb-4986-8096-f3fe338d7955-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465700 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 
06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465730 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465756 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhnpv\" (UniqueName: \"kubernetes.io/projected/5be0f431-a32d-4add-b45a-a176ecdf6eb2-kube-api-access-zhnpv\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465790 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465819 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6773ec1d-4c68-45d1-ac48-50a04ece6c27-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465844 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2923f-db3e-40bf-b821-27e0d8ab84ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465860 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465882 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-encryption-config\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465902 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-dir\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465922 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: 
\"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.465982 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466004 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-auth-proxy-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466030 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466107 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fc1c84-bcbb-4986-8096-f3fe338d7955-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466149 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466197 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t584j\" (UniqueName: \"kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466220 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466239 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-client\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: 
\"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466258 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5be0f431-a32d-4add-b45a-a176ecdf6eb2-machine-approver-tls\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466291 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466307 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-policies\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466330 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466394 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466417 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466434 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtz9q\" (UniqueName: \"kubernetes.io/projected/8be2923f-db3e-40bf-b821-27e0d8ab84ae-kube-api-access-jtz9q\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466457 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p798j\" (UniqueName: \"kubernetes.io/projected/fccdb01a-b512-49e4-9b8f-9a777aea21b9-kube-api-access-p798j\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466477 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxfd\" (UniqueName: \"kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466515 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466552 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-serving-cert\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466571 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466610 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7rr\" (UniqueName: \"kubernetes.io/projected/6773ec1d-4c68-45d1-ac48-50a04ece6c27-kube-api-access-zd7rr\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.466651 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.467644 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.470155 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484619 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484722 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484750 5058 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484823 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484877 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484921 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.484973 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.485027 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.485058 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.485185 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-248mp"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.485828 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-htb67"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.485903 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486000 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486027 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486126 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486237 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-htb67" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486343 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486399 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486034 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486553 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486646 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486736 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.486885 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487071 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487281 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487357 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487427 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487510 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487579 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487624 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487585 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487704 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.487763 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.490381 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.490513 5058 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.490593 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.490672 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491260 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491366 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491611 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491692 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491761 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491853 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.491993 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492127 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492200 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492280 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492334 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492387 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492426 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492509 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492391 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492589 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492611 5058 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.492659 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.494616 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.494824 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.495483 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.495661 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.495693 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.495944 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.496952 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.498313 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.498759 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.499069 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.506162 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.506879 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.513114 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzzx7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.513941 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.514893 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.515832 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.517076 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnqgw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.519250 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.520432 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pkv6l"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.520605 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.520622 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.521255 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.521615 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.522096 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.522160 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.522627 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.522958 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.528136 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.548715 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.548817 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.549030 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.549152 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.551459 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.551824 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.552421 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.552969 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.561171 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.561541 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.561908 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5g4zj"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.562718 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.563594 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567471 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b3707a5-688b-4f49-ba57-4159b9809f0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567515 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567537 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-policies\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567557 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4559\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-kube-api-access-b4559\") pod 
\"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567575 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3707a5-688b-4f49-ba57-4159b9809f0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567607 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567626 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92274d4b-dd07-43a9-b763-39052f342669-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567659 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33215a89-847d-416d-9121-ad7a4e3ae01c-config\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567676 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567693 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: 
\"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567711 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtz9q\" (UniqueName: \"kubernetes.io/projected/8be2923f-db3e-40bf-b821-27e0d8ab84ae-kube-api-access-jtz9q\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567729 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxfd\" (UniqueName: \"kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567744 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567759 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-serving-cert\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567774 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p798j\" (UniqueName: \"kubernetes.io/projected/fccdb01a-b512-49e4-9b8f-9a777aea21b9-kube-api-access-p798j\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567794 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5vl\" (UniqueName: \"kubernetes.io/projected/b3abe1bf-4556-435c-9aa1-3f9a997f3842-kube-api-access-2z5vl\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567831 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567851 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7rr\" (UniqueName: \"kubernetes.io/projected/6773ec1d-4c68-45d1-ac48-50a04ece6c27-kube-api-access-zd7rr\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: 
I1014 06:49:52.567871 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-config\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567889 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567908 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp67\" (UniqueName: \"kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567924 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klwbg\" (UniqueName: \"kubernetes.io/projected/00fc1c84-bcbb-4986-8096-f3fe338d7955-kube-api-access-klwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8be2923f-db3e-40bf-b821-27e0d8ab84ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567960 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33215a89-847d-416d-9121-ad7a4e3ae01c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.567991 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568007 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92274d4b-dd07-43a9-b763-39052f342669-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568026 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fc1c84-bcbb-4986-8096-f3fe338d7955-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568049 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568307 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-policies\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568418 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568487 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.568089 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60860626-d1e5-475c-a7aa-ac2a2f1e1593-serving-cert\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.569157 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.569167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.569192 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570008 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570009 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570009 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570106 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570158 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhnpv\" (UniqueName: \"kubernetes.io/projected/5be0f431-a32d-4add-b45a-a176ecdf6eb2-kube-api-access-zhnpv\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570166 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570196 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6773ec1d-4c68-45d1-ac48-50a04ece6c27-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570218 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmtg\" (UniqueName: \"kubernetes.io/projected/92274d4b-dd07-43a9-b763-39052f342669-kube-api-access-6jmtg\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570247 5058 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-srv-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570281 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2923f-db3e-40bf-b821-27e0d8ab84ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570279 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8be2923f-db3e-40bf-b821-27e0d8ab84ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570311 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570327 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570329 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33215a89-847d-416d-9121-ad7a4e3ae01c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570540 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-encryption-config\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-trusted-ca\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570693 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-dir\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570723 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.570752 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbm4\" (UniqueName: \"kubernetes.io/projected/60860626-d1e5-475c-a7aa-ac2a2f1e1593-kube-api-access-rpbm4\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.571336 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.571482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-auth-proxy-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572206 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5be0f431-a32d-4add-b45a-a176ecdf6eb2-auth-proxy-config\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572269 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572341 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572563 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572590 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: 
\"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572649 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fc1c84-bcbb-4986-8096-f3fe338d7955-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572669 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572711 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t584j\" (UniqueName: \"kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572748 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572764 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-client\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.572782 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5be0f431-a32d-4add-b45a-a176ecdf6eb2-machine-approver-tls\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575002 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-serving-cert\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575187 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575189 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-encryption-config\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575221 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fccdb01a-b512-49e4-9b8f-9a777aea21b9-audit-dir\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575324 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.575727 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00fc1c84-bcbb-4986-8096-f3fe338d7955-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.578513 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.579758 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.580396 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.585965 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2923f-db3e-40bf-b821-27e0d8ab84ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.586237 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fccdb01a-b512-49e4-9b8f-9a777aea21b9-etcd-client\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.586357 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00fc1c84-bcbb-4986-8096-f3fe338d7955-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.586423 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.589374 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.590658 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.591949 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.592593 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.593119 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.593197 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.598721 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5be0f431-a32d-4add-b45a-a176ecdf6eb2-machine-approver-tls\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.602173 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.602734 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.603300 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4mdl"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.603651 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.603871 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6773ec1d-4c68-45d1-ac48-50a04ece6c27-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.604854 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.608145 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.608639 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.608660 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-j42gk"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.609182 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.609393 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.610588 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.611031 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.611144 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.612186 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.614442 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m4grl"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.614711 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.615904 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.616786 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fhbms"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.617878 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.620176 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.622091 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qhldv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.622736 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.624338 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.625971 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.626984 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.627978 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-htb67"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.629059 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.629880 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 
06:49:52.630622 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.631763 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.633205 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5g4zj"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.634875 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j925t"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.639276 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.639785 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.642914 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.644202 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzzx7"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.645458 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.646610 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnqgw"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.647588 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.648739 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.649550 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.649893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.651264 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j42gk"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.652606 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.653694 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j925t"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.654736 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.655851 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.656924 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-248mp"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.658233 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.659171 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4mdl"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.660213 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tbg6n"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.660708 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.661279 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qq72t"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.661880 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.662705 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tbg6n"] Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.670113 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673428 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-trusted-ca\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673458 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbm4\" (UniqueName: \"kubernetes.io/projected/60860626-d1e5-475c-a7aa-ac2a2f1e1593-kube-api-access-rpbm4\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673504 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b3707a5-688b-4f49-ba57-4159b9809f0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4559\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-kube-api-access-b4559\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673542 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673555 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3707a5-688b-4f49-ba57-4159b9809f0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673573 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92274d4b-dd07-43a9-b763-39052f342669-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673591 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33215a89-847d-416d-9121-ad7a4e3ae01c-config\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673609 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673634 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5vl\" (UniqueName: \"kubernetes.io/projected/b3abe1bf-4556-435c-9aa1-3f9a997f3842-kube-api-access-2z5vl\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673661 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-config\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673683 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33215a89-847d-416d-9121-ad7a4e3ae01c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673734 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/92274d4b-dd07-43a9-b763-39052f342669-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673761 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60860626-d1e5-475c-a7aa-ac2a2f1e1593-serving-cert\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673781 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmtg\" (UniqueName: \"kubernetes.io/projected/92274d4b-dd07-43a9-b763-39052f342669-kube-api-access-6jmtg\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673797 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-srv-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.673838 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33215a89-847d-416d-9121-ad7a4e3ae01c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.674881 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-trusted-ca\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.675607 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60860626-d1e5-475c-a7aa-ac2a2f1e1593-config\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.676003 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33215a89-847d-416d-9121-ad7a4e3ae01c-config\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.676018 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3707a5-688b-4f49-ba57-4159b9809f0c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: 
\"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.676540 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92274d4b-dd07-43a9-b763-39052f342669-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.678013 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b3707a5-688b-4f49-ba57-4159b9809f0c-metrics-tls\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.678137 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33215a89-847d-416d-9121-ad7a4e3ae01c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.678397 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-srv-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.679348 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3abe1bf-4556-435c-9aa1-3f9a997f3842-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.679652 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60860626-d1e5-475c-a7aa-ac2a2f1e1593-serving-cert\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.681826 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92274d4b-dd07-43a9-b763-39052f342669-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.690317 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.710228 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.730262 5058 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.749881 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.789346 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.809616 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.830940 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.850382 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.870031 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.891272 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.919050 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.931380 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.951122 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.970665 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 14 06:49:52 crc kubenswrapper[5058]: I1014 06:49:52.991016 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.010687 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.050727 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.070002 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.091428 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.111098 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.130209 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.150333 5058 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.172945 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.190533 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.210621 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.230174 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.250217 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.271426 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.292588 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.310921 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.329849 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.349468 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.371284 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.390763 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.410301 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.431039 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.451282 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.470888 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.490316 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.511496 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" 
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.530571 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.550508 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.569123 5058 request.go:700] Waited for 1.016854428s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.571148 5058 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.571321 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca podName:56f2ec8d-e5f5-41f2-8440-0b771ff55ee9 nodeName:}" failed. No retries permitted until 2025-10-14 06:49:54.07128498 +0000 UTC m=+141.982368796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca") pod "route-controller-manager-6576b87f9c-79mpw" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9") : failed to sync configmap cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.571642 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.572595 5058 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.572837 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert podName:56f2ec8d-e5f5-41f2-8440-0b771ff55ee9 nodeName:}" failed. No retries permitted until 2025-10-14 06:49:54.072772423 +0000 UTC m=+141.983856439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert") pod "route-controller-manager-6576b87f9c-79mpw" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9") : failed to sync secret cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.575887 5058 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: E1014 06:49:53.575973 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config podName:56f2ec8d-e5f5-41f2-8440-0b771ff55ee9 nodeName:}" failed. No retries permitted until 2025-10-14 06:49:54.075953505 +0000 UTC m=+141.987037321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config") pod "route-controller-manager-6576b87f9c-79mpw" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9") : failed to sync configmap cache: timed out waiting for the condition
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.590937 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.610285 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.630881 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.651440 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.670337 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.690622 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.710097 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.730564 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.750292 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.798761 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7rr\" (UniqueName: \"kubernetes.io/projected/6773ec1d-4c68-45d1-ac48-50a04ece6c27-kube-api-access-zd7rr\") pod \"cluster-samples-operator-665b6dd947-kxhxc\" (UID: \"6773ec1d-4c68-45d1-ac48-50a04ece6c27\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.826045 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxfd\" (UniqueName: \"kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd\") pod \"controller-manager-879f6c89f-mdp6d\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.839584 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p798j\" (UniqueName: \"kubernetes.io/projected/fccdb01a-b512-49e4-9b8f-9a777aea21b9-kube-api-access-p798j\") pod \"apiserver-7bbb656c7d-w55c6\" (UID: \"fccdb01a-b512-49e4-9b8f-9a777aea21b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.852916 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtz9q\" (UniqueName: \"kubernetes.io/projected/8be2923f-db3e-40bf-b821-27e0d8ab84ae-kube-api-access-jtz9q\") pod \"openshift-config-operator-7777fb866f-92kq5\" (UID: \"8be2923f-db3e-40bf-b821-27e0d8ab84ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.871236 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.879106 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klwbg\" (UniqueName: \"kubernetes.io/projected/00fc1c84-bcbb-4986-8096-f3fe338d7955-kube-api-access-klwbg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9hl8v\" (UID: \"00fc1c84-bcbb-4986-8096-f3fe338d7955\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.890965 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.918221 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.938342 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp67\" (UniqueName: \"kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67\") pod \"console-f9d7485db-fhbms\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " pod="openshift-console/console-f9d7485db-fhbms"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.958630 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhnpv\" (UniqueName: \"kubernetes.io/projected/5be0f431-a32d-4add-b45a-a176ecdf6eb2-kube-api-access-zhnpv\") pod \"machine-approver-56656f9798-s2lt5\" (UID: \"5be0f431-a32d-4add-b45a-a176ecdf6eb2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.962583 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.971702 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Oct 14 06:49:53 crc kubenswrapper[5058]: W1014 06:49:53.983089 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be0f431_a32d_4add_b45a_a176ecdf6eb2.slice/crio-f7f8490dd429c3a346b69ea828f7afc1cf812439bad5a9f974d65aac312540ca WatchSource:0}: Error finding container f7f8490dd429c3a346b69ea828f7afc1cf812439bad5a9f974d65aac312540ca: Status 404 returned error can't find the container with id f7f8490dd429c3a346b69ea828f7afc1cf812439bad5a9f974d65aac312540ca
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.991759 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.992469 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d"
Oct 14 06:49:53 crc kubenswrapper[5058]: I1014 06:49:53.998606 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.011973 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.014085 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fhbms"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.023611 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.034985 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.051069 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.053893 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.080631 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.092541 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.096382 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.096456 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.096502 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.111551 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.131080 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.150187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.170330 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.189698 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.210158 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.233956 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.251349 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.273327 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.288127 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v"]
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.290287 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Oct 14 06:49:54 crc kubenswrapper[5058]: W1014 06:49:54.313642 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00fc1c84_bcbb_4986_8096_f3fe338d7955.slice/crio-ed26df82ff6dc7b2e8a2f5b128723061eb5d8ae7cbb4f94a9c0641a2efc08293 WatchSource:0}: Error finding container ed26df82ff6dc7b2e8a2f5b128723061eb5d8ae7cbb4f94a9c0641a2efc08293: Status 404 returned error can't find the container with id ed26df82ff6dc7b2e8a2f5b128723061eb5d8ae7cbb4f94a9c0641a2efc08293
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.313768 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.329505 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.340943 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fhbms"]
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.350522 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Oct 14 06:49:54 crc kubenswrapper[5058]: W1014 06:49:54.352719 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf05623c8_3bf6_44d3_a522_cb1175227bf4.slice/crio-7db196072167af6f2bd8f8769180da8684c23881b9483070028b91e0a2691978 WatchSource:0}: Error finding container 7db196072167af6f2bd8f8769180da8684c23881b9483070028b91e0a2691978: Status 404 returned error can't find the container with id 7db196072167af6f2bd8f8769180da8684c23881b9483070028b91e0a2691978
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.371868 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.389441 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.395744 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"]
Oct 14 06:49:54 crc kubenswrapper[5058]: W1014 06:49:54.407194 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be2923f_db3e_40bf_b821_27e0d8ab84ae.slice/crio-11edee61b0b261685252b2569a6a49f5f4072b6e00ce196f8ed1a9aee19773ef WatchSource:0}: Error finding container 11edee61b0b261685252b2569a6a49f5f4072b6e00ce196f8ed1a9aee19773ef: Status 404 returned error can't find the container with id 11edee61b0b261685252b2569a6a49f5f4072b6e00ce196f8ed1a9aee19773ef
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.409700 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.430183 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.439456 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc"]
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.450143 5058 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.470940 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.490756 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.509892 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.529909 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.550137 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.558859 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"]
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.562490 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"]
Oct 14 06:49:54 crc kubenswrapper[5058]: W1014 06:49:54.569559 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode09468c0_bfc1_42fe_918d_4068041806c0.slice/crio-d1bbf61b076f654ae38cf4e1efa7b041f7ce2a97a8a69dd0cb165796a97be110 WatchSource:0}: Error finding container d1bbf61b076f654ae38cf4e1efa7b041f7ce2a97a8a69dd0cb165796a97be110: Status 404 returned error can't find the container with id d1bbf61b076f654ae38cf4e1efa7b041f7ce2a97a8a69dd0cb165796a97be110
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.571415 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Oct 14 06:49:54 crc kubenswrapper[5058]: W1014 06:49:54.572744 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfccdb01a_b512_49e4_9b8f_9a777aea21b9.slice/crio-b2fbb5e2fce9d14a6db2d506ae0399955c4c20429ff3da6f42169b70986cf593 WatchSource:0}: Error finding container b2fbb5e2fce9d14a6db2d506ae0399955c4c20429ff3da6f42169b70986cf593: Status 404 returned error can't find the container with id b2fbb5e2fce9d14a6db2d506ae0399955c4c20429ff3da6f42169b70986cf593
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.588203 5058 request.go:700] Waited for 1.92596279s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.589873 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.629871 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33215a89-847d-416d-9121-ad7a4e3ae01c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jl7h6\" (UID: \"33215a89-847d-416d-9121-ad7a4e3ae01c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.638302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" event={"ID":"e09468c0-bfc1-42fe-918d-4068041806c0","Type":"ContainerStarted","Data":"d1bbf61b076f654ae38cf4e1efa7b041f7ce2a97a8a69dd0cb165796a97be110"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.639247 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" event={"ID":"fccdb01a-b512-49e4-9b8f-9a777aea21b9","Type":"ContainerStarted","Data":"b2fbb5e2fce9d14a6db2d506ae0399955c4c20429ff3da6f42169b70986cf593"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.642647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" event={"ID":"5be0f431-a32d-4add-b45a-a176ecdf6eb2","Type":"ContainerStarted","Data":"7d2ef50f701cdd1509ec347b2be332668c11dc8ffafb997868c5b08d3e0685ac"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.642674 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" event={"ID":"5be0f431-a32d-4add-b45a-a176ecdf6eb2","Type":"ContainerStarted","Data":"13597f275f910551182ca95ad085befe643226313c8c8f9bb7406627fff033f2"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.642684 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" event={"ID":"5be0f431-a32d-4add-b45a-a176ecdf6eb2","Type":"ContainerStarted","Data":"f7f8490dd429c3a346b69ea828f7afc1cf812439bad5a9f974d65aac312540ca"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.645500 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" event={"ID":"8be2923f-db3e-40bf-b821-27e0d8ab84ae","Type":"ContainerStarted","Data":"d36d69a1863153a2a2329caba2ed68fe351cea42dc3b29f0d2d2edff7353aa5b"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.645526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" event={"ID":"8be2923f-db3e-40bf-b821-27e0d8ab84ae","Type":"ContainerStarted","Data":"11edee61b0b261685252b2569a6a49f5f4072b6e00ce196f8ed1a9aee19773ef"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.646085 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbm4\" (UniqueName: \"kubernetes.io/projected/60860626-d1e5-475c-a7aa-ac2a2f1e1593-kube-api-access-rpbm4\") pod \"console-operator-58897d9998-qhldv\" (UID: \"60860626-d1e5-475c-a7aa-ac2a2f1e1593\") " pod="openshift-console-operator/console-operator-58897d9998-qhldv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.647389 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" event={"ID":"6773ec1d-4c68-45d1-ac48-50a04ece6c27","Type":"ContainerStarted","Data":"8b642ace8869a97102d9c2babfab0e46bed119dd78258954a18f3ecfbb07304d"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.649311 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" event={"ID":"00fc1c84-bcbb-4986-8096-f3fe338d7955","Type":"ContainerStarted","Data":"1b3dc65062364e287e053e3dc088bd089a8ce45585caf8dedf7e67f99f31927a"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.649345 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" event={"ID":"00fc1c84-bcbb-4986-8096-f3fe338d7955","Type":"ContainerStarted","Data":"ed26df82ff6dc7b2e8a2f5b128723061eb5d8ae7cbb4f94a9c0641a2efc08293"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.651979 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fhbms" event={"ID":"f05623c8-3bf6-44d3-a522-cb1175227bf4","Type":"ContainerStarted","Data":"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.652026 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fhbms" event={"ID":"f05623c8-3bf6-44d3-a522-cb1175227bf4","Type":"ContainerStarted","Data":"7db196072167af6f2bd8f8769180da8684c23881b9483070028b91e0a2691978"}
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.668479 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5vl\" (UniqueName: \"kubernetes.io/projected/b3abe1bf-4556-435c-9aa1-3f9a997f3842-kube-api-access-2z5vl\") pod \"olm-operator-6b444d44fb-qh79q\" (UID: \"b3abe1bf-4556-435c-9aa1-3f9a997f3842\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.682353 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4559\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-kube-api-access-b4559\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.704488 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3707a5-688b-4f49-ba57-4159b9809f0c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xsms7\" (UID: \"7b3707a5-688b-4f49-ba57-4159b9809f0c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.723860 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmtg\" (UniqueName: \"kubernetes.io/projected/92274d4b-dd07-43a9-b763-39052f342669-kube-api-access-6jmtg\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfggz\" (UID: \"92274d4b-dd07-43a9-b763-39052f342669\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.726435 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.739515 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.748435 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qhldv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.755498 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.764046 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.770556 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.783581 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.792293 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.800579 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805482 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrc5\" (UniqueName: \"kubernetes.io/projected/5ff50c24-42d7-4716-bf86-a6bc553086d6-kube-api-access-wkrc5\") pod \"downloads-7954f5f757-htb67\" (UID: \"5ff50c24-42d7-4716-bf86-a6bc553086d6\") " pod="openshift-console/downloads-7954f5f757-htb67"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805534 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805565 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-config\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805583 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-serving-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9746q\" (UniqueName: \"kubernetes.io/projected/ee1b5ef4-b161-48dd-9e71-6217b04acf89-kube-api-access-9746q\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805625 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-serving-cert\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805658 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-metrics-tls\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805675 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzq48\" (UniqueName: \"kubernetes.io/projected/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-kube-api-access-zzq48\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805692 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pf9\" (UniqueName: \"kubernetes.io/projected/853301cc-6aa1-4599-bf5a-58b9014da5da-kube-api-access-76pf9\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805711 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805756 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805773 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805815 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9s2\" (UniqueName: \"kubernetes.io/projected/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-kube-api-access-zn9s2\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805833 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805849 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805866 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244cd\" (UniqueName: \"kubernetes.io/projected/2e40dad8-9b26-4337-9231-d7b952047695-kube-api-access-244cd\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: \"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805881 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b4f849-7c8c-43f7-858a-1db881105d28-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805895 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4z65\" (UniqueName: \"kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805921 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805954 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.805970 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806007 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-service-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806022 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dwg\" (UniqueName: \"kubernetes.io/projected/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-kube-api-access-d8dwg\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806036 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806049 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-client\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806069 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806087 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806103 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806118 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806150 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-config\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806164 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5750534-5283-432b-8290-bc8215218d19-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit-dir\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806205 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4p8\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-kube-api-access-7w4p8\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806221 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1b5ef4-b161-48dd-9e71-6217b04acf89-serving-cert\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806239 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806259 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/853301cc-6aa1-4599-bf5a-58b9014da5da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806293 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806656 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8p6\" (UniqueName: \"kubernetes.io/projected/20b4f849-7c8c-43f7-858a-1db881105d28-kube-api-access-8p8p6\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-node-pullsecrets\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806832 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806895 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806929 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806946 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2e40dad8-9b26-4337-9231-d7b952047695-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: \"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.806963 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807339 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807370 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-encryption-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807402 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89rcw\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: E1014 06:49:54.807420 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.307407008 +0000 UTC m=+143.218490814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807443 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/853301cc-6aa1-4599-bf5a-58b9014da5da-proxy-tls\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807464 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807644 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807682 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807701 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5750534-5283-432b-8290-bc8215218d19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807747 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-image-import-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807767 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-images\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.807831 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.809303 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.832912 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.832980 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.842290 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t584j\" (UniqueName: \"kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j\") pod \"route-controller-manager-6576b87f9c-79mpw\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910283 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:54 crc kubenswrapper[5058]: E1014 06:49:54.910420 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.410399366 +0000 UTC m=+143.321483172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910452 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910476 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dwg\" (UniqueName: \"kubernetes.io/projected/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-kube-api-access-d8dwg\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910496 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910514 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910557 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-profile-collector-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv"
Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910610 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit-dir\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-serving-cert\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910641 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910655 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-webhook-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910679 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910695 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce7d92-f8af-4edc-bbff-593f1c9329e1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910713 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/853301cc-6aa1-4599-bf5a-58b9014da5da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910730 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910747 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8p6\" (UniqueName: \"kubernetes.io/projected/20b4f849-7c8c-43f7-858a-1db881105d28-kube-api-access-8p8p6\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910764 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrv7m\" (UniqueName: \"kubernetes.io/projected/2caab735-0708-478f-83af-e87fe20f22b8-kube-api-access-xrv7m\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:54 crc kubenswrapper[5058]: E1014 06:49:54.910773 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.410760676 +0000 UTC m=+143.321844482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910821 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-node-pullsecrets\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910848 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910867 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6d98\" (UniqueName: \"kubernetes.io/projected/267d0ed6-2fab-4d5f-a007-dbb9856b45de-kube-api-access-d6d98\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910883 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-srv-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910907 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910933 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-encryption-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: 
\"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910948 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2e40dad8-9b26-4337-9231-d7b952047695-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: \"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910965 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.910980 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6gvz\" (UniqueName: \"kubernetes.io/projected/bb740ddb-f795-4ed4-b94a-7bead0d054e7-kube-api-access-z6gvz\") pod \"migrator-59844c95c7-qmknz\" (UID: \"bb740ddb-f795-4ed4-b94a-7bead0d054e7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911020 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89rcw\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911038 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911055 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911071 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-images\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911111 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hs7p\" (UniqueName: \"kubernetes.io/projected/c799ff35-37f9-4fa7-b254-30b4936f6af8-kube-api-access-9hs7p\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911129 
5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-stats-auth\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911149 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrc5\" (UniqueName: \"kubernetes.io/projected/5ff50c24-42d7-4716-bf86-a6bc553086d6-kube-api-access-wkrc5\") pod \"downloads-7954f5f757-htb67\" (UID: \"5ff50c24-42d7-4716-bf86-a6bc553086d6\") " pod="openshift-console/downloads-7954f5f757-htb67" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911182 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tfr\" (UniqueName: \"kubernetes.io/projected/af0798d9-84b9-49ce-8485-8b6efe5fcd42-kube-api-access-54tfr\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911256 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/376b3099-2ce3-4c4c-b3c3-33cd426751aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911278 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25vb\" (UniqueName: \"kubernetes.io/projected/d69e9ea9-ae21-42ae-a6f6-845adda33d54-kube-api-access-t25vb\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911297 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-serving-cert\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911313 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911328 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9746q\" (UniqueName: \"kubernetes.io/projected/ee1b5ef4-b161-48dd-9e71-6217b04acf89-kube-api-access-9746q\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911343 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-metrics-tls\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911358 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzq48\" (UniqueName: \"kubernetes.io/projected/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-kube-api-access-zzq48\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911373 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/376b3099-2ce3-4c4c-b3c3-33cd426751aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911390 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911405 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911422 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-node-bootstrap-token\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911436 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-metrics-certs\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911462 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20b4f849-7c8c-43f7-858a-1db881105d28-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911477 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4z65\" (UniqueName: \"kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911510 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmr6\" (UniqueName: \"kubernetes.io/projected/f24d0800-742d-4261-a6bc-f29ff2559c67-kube-api-access-bkmr6\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911525 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376b3099-2ce3-4c4c-b3c3-33cd426751aa-config\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911542 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-mountpoint-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911558 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-etcd-client\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911573 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-service-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911590 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdwk\" (UniqueName: 
\"kubernetes.io/projected/5fb110f6-ed24-411b-a654-cd7aac0538b6-kube-api-access-drdwk\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911605 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cflg\" (UniqueName: \"kubernetes.io/projected/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-kube-api-access-6cflg\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911619 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-registration-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911644 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-client\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911662 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscfg\" (UniqueName: \"kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911682 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911696 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-socket-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911710 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzms\" (UniqueName: \"kubernetes.io/projected/0f3b2066-2ea9-421b-863d-defe1f13cf46-kube-api-access-zdzms\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911735 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-config\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: 
\"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911746 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5750534-5283-432b-8290-bc8215218d19-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911783 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-plugins-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911832 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1b5ef4-b161-48dd-9e71-6217b04acf89-serving-cert\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911850 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911867 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4p8\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-kube-api-access-7w4p8\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911885 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911920 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fb110f6-ed24-411b-a654-cd7aac0538b6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:54 
crc kubenswrapper[5058]: I1014 06:49:54.911936 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911951 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ce7d92-f8af-4edc-bbff-593f1c9329e1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911980 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.911996 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912029 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/267d0ed6-2fab-4d5f-a007-dbb9856b45de-config-volume\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912044 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-config\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912058 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-default-certificate\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912074 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2caab735-0708-478f-83af-e87fe20f22b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912090 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24d0800-742d-4261-a6bc-f29ff2559c67-config\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912109 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/853301cc-6aa1-4599-bf5a-58b9014da5da-proxy-tls\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912124 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912139 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-images\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912174 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5750534-5283-432b-8290-bc8215218d19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912208 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-image-import-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912224 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-cabundle\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912241 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ce7d92-f8af-4edc-bbff-593f1c9329e1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912259 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912285 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-cert\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912311 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/267d0ed6-2fab-4d5f-a007-dbb9856b45de-metrics-tls\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912329 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-config\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912345 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-serving-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912362 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/af0798d9-84b9-49ce-8485-8b6efe5fcd42-tmpfs\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912376 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-apiservice-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912386 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-node-pullsecrets\") pod \"apiserver-76f77b778f-m4grl\" (UID: 
\"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5ngq\" (UniqueName: \"kubernetes.io/projected/1db7f396-e065-4bc0-92fa-712ccc5071e5-kube-api-access-q5ngq\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912418 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-csi-data-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912432 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckjwn\" (UniqueName: \"kubernetes.io/projected/76dbc61e-486b-4a3f-9d1d-6e310fe49672-kube-api-access-ckjwn\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912449 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pf9\" (UniqueName: \"kubernetes.io/projected/853301cc-6aa1-4599-bf5a-58b9014da5da-kube-api-access-76pf9\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912464 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912494 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5750534-5283-432b-8290-bc8215218d19-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912510 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9s2\" (UniqueName: \"kubernetes.io/projected/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-kube-api-access-zn9s2\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 
06:49:54.912526 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-service-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912539 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit-dir\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912542 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912557 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2nd\" (UniqueName: \"kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912574 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912589 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912614 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-244cd\" (UniqueName: \"kubernetes.io/projected/2e40dad8-9b26-4337-9231-d7b952047695-kube-api-access-244cd\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: \"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912630 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-key\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912655 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912671 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8ww\" (UniqueName: \"kubernetes.io/projected/393cb85f-5c30-42a2-94da-65eddc249939-kube-api-access-kn8ww\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912686 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89pn\" (UniqueName: \"kubernetes.io/projected/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-kube-api-access-b89pn\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912719 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912733 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-proxy-tls\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912766 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24d0800-742d-4261-a6bc-f29ff2559c67-serving-cert\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912780 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-certs\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.912797 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69e9ea9-ae21-42ae-a6f6-845adda33d54-service-ca-bundle\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.914985 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.915851 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-config\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.915884 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-service-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.916981 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-serving-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.917231 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.917699 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-audit\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.917947 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.919593 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.920233 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.920511 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.921098 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/853301cc-6aa1-4599-bf5a-58b9014da5da-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.921406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-image-import-ca\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.921646 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b4f849-7c8c-43f7-858a-1db881105d28-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.922259 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.923571 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-images\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.923768 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-etcd-client\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.924111 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5750534-5283-432b-8290-bc8215218d19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.924516 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: 
\"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.924580 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.925157 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.925183 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.925645 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.925934 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1b5ef4-b161-48dd-9e71-6217b04acf89-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.927876 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b4f849-7c8c-43f7-858a-1db881105d28-config\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.928718 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.929083 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.929264 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-serving-cert\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.929348 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.931338 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.931508 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.933045 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.933447 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee1b5ef4-b161-48dd-9e71-6217b04acf89-serving-cert\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.934234 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-metrics-tls\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.934346 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.939790 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2e40dad8-9b26-4337-9231-d7b952047695-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: 
\"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.940201 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-encryption-config\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.941714 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.942459 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/853301cc-6aa1-4599-bf5a-58b9014da5da-proxy-tls\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.948026 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dwg\" (UniqueName: \"kubernetes.io/projected/cb29cc9b-9890-4f0d-a4ca-bfc93dce8628-kube-api-access-d8dwg\") pod \"apiserver-76f77b778f-m4grl\" (UID: \"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628\") " pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.959109 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.964832 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrc5\" (UniqueName: \"kubernetes.io/projected/5ff50c24-42d7-4716-bf86-a6bc553086d6-kube-api-access-wkrc5\") pod \"downloads-7954f5f757-htb67\" (UID: \"5ff50c24-42d7-4716-bf86-a6bc553086d6\") " pod="openshift-console/downloads-7954f5f757-htb67" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.970185 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.978048 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz"] Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.989247 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzq48\" (UniqueName: \"kubernetes.io/projected/c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc-kube-api-access-zzq48\") pod \"openshift-apiserver-operator-796bbdcf4f-r7sc7\" (UID: \"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:54 crc kubenswrapper[5058]: I1014 06:49:54.995506 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.005284 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9746q\" (UniqueName: \"kubernetes.io/projected/ee1b5ef4-b161-48dd-9e71-6217b04acf89-kube-api-access-9746q\") pod \"authentication-operator-69f744f599-7d7q2\" (UID: \"ee1b5ef4-b161-48dd-9e71-6217b04acf89\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.010692 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018342 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018535 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/376b3099-2ce3-4c4c-b3c3-33cd426751aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018558 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-node-bootstrap-token\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018576 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-metrics-certs\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018596 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376b3099-2ce3-4c4c-b3c3-33cd426751aa-config\") pod 
\"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018611 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-mountpoint-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018628 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmr6\" (UniqueName: \"kubernetes.io/projected/f24d0800-742d-4261-a6bc-f29ff2559c67-kube-api-access-bkmr6\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018647 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-etcd-client\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018663 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdwk\" (UniqueName: \"kubernetes.io/projected/5fb110f6-ed24-411b-a654-cd7aac0538b6-kube-api-access-drdwk\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018696 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cflg\" (UniqueName: \"kubernetes.io/projected/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-kube-api-access-6cflg\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018710 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-registration-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018727 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gscfg\" (UniqueName: \"kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018741 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-socket-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 
14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018756 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzms\" (UniqueName: \"kubernetes.io/projected/0f3b2066-2ea9-421b-863d-defe1f13cf46-kube-api-access-zdzms\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018769 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-plugins-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018797 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018827 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ce7d92-f8af-4edc-bbff-593f1c9329e1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018845 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fb110f6-ed24-411b-a654-cd7aac0538b6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018901 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018918 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2caab735-0708-478f-83af-e87fe20f22b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018933 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/267d0ed6-2fab-4d5f-a007-dbb9856b45de-config-volume\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018947 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-config\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018961 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-default-certificate\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018976 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24d0800-742d-4261-a6bc-f29ff2559c67-config\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.018993 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-images\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019009 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-cabundle\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019023 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ce7d92-f8af-4edc-bbff-593f1c9329e1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019041 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-cert\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019060 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/267d0ed6-2fab-4d5f-a007-dbb9856b45de-metrics-tls\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019081 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-apiservice-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019096 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/af0798d9-84b9-49ce-8485-8b6efe5fcd42-tmpfs\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019113 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-csi-data-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019128 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckjwn\" (UniqueName: \"kubernetes.io/projected/76dbc61e-486b-4a3f-9d1d-6e310fe49672-kube-api-access-ckjwn\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019143 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5ngq\" (UniqueName: \"kubernetes.io/projected/1db7f396-e065-4bc0-92fa-712ccc5071e5-kube-api-access-q5ngq\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019166 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019180 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2nd\" (UniqueName: \"kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019200 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-service-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019220 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-key\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: 
\"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019236 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8ww\" (UniqueName: \"kubernetes.io/projected/393cb85f-5c30-42a2-94da-65eddc249939-kube-api-access-kn8ww\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019250 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89pn\" (UniqueName: \"kubernetes.io/projected/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-kube-api-access-b89pn\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019281 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-proxy-tls\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019296 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69e9ea9-ae21-42ae-a6f6-845adda33d54-service-ca-bundle\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019317 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24d0800-742d-4261-a6bc-f29ff2559c67-serving-cert\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019331 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-certs\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019359 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-profile-collector-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019365 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-plugins-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019373 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-webhook-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019468 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-serving-cert\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019509 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019530 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019549 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce7d92-f8af-4edc-bbff-593f1c9329e1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019637 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrv7m\" (UniqueName: \"kubernetes.io/projected/2caab735-0708-478f-83af-e87fe20f22b8-kube-api-access-xrv7m\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019700 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6d98\" (UniqueName: \"kubernetes.io/projected/267d0ed6-2fab-4d5f-a007-dbb9856b45de-kube-api-access-d6d98\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019827 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-srv-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019861 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6gvz\" (UniqueName: \"kubernetes.io/projected/bb740ddb-f795-4ed4-b94a-7bead0d054e7-kube-api-access-z6gvz\") pod \"migrator-59844c95c7-qmknz\" (UID: 
\"bb740ddb-f795-4ed4-b94a-7bead0d054e7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019903 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hs7p\" (UniqueName: \"kubernetes.io/projected/c799ff35-37f9-4fa7-b254-30b4936f6af8-kube-api-access-9hs7p\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.019918 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-stats-auth\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.020206 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tfr\" (UniqueName: \"kubernetes.io/projected/af0798d9-84b9-49ce-8485-8b6efe5fcd42-kube-api-access-54tfr\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.020244 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25vb\" (UniqueName: \"kubernetes.io/projected/d69e9ea9-ae21-42ae-a6f6-845adda33d54-kube-api-access-t25vb\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.020263 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/376b3099-2ce3-4c4c-b3c3-33cd426751aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: W1014 06:49:55.020496 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3abe1bf_4556_435c_9aa1_3f9a997f3842.slice/crio-5de02cfffea61815b2ce515a1cbd21c3cd9a1e1a4a24f2d5c2a338039f91883c WatchSource:0}: Error finding container 5de02cfffea61815b2ce515a1cbd21c3cd9a1e1a4a24f2d5c2a338039f91883c: Status 404 returned error can't find the container with id 5de02cfffea61815b2ce515a1cbd21c3cd9a1e1a4a24f2d5c2a338039f91883c Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.020522 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.520505379 +0000 UTC m=+143.431589175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.021004 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/af0798d9-84b9-49ce-8485-8b6efe5fcd42-tmpfs\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.021088 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-csi-data-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.021698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-socket-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.023280 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24d0800-742d-4261-a6bc-f29ff2559c67-config\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.023440 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-images\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.023690 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/267d0ed6-2fab-4d5f-a007-dbb9856b45de-config-volume\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.023971 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-service-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.024499 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.024549 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-mountpoint-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.024897 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-cabundle\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.024963 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376b3099-2ce3-4c4c-b3c3-33cd426751aa-config\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.025495 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.025841 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69e9ea9-ae21-42ae-a6f6-845adda33d54-service-ca-bundle\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.026024 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-config\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.026193 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ce7d92-f8af-4edc-bbff-593f1c9329e1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.026953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.027234 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f3b2066-2ea9-421b-863d-defe1f13cf46-registration-dir\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.028207 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/393cb85f-5c30-42a2-94da-65eddc249939-etcd-ca\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.028834 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-stats-auth\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.029851 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-metrics-certs\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.031282 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-cert\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.034167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/267d0ed6-2fab-4d5f-a007-dbb9856b45de-metrics-tls\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.036791 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/376b3099-2ce3-4c4c-b3c3-33cd426751aa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.036883 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.038105 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-profile-collector-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.038992 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-certs\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.039717 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-etcd-client\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.039849 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.040316 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f24d0800-742d-4261-a6bc-f29ff2559c67-serving-cert\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.040880 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fb110f6-ed24-411b-a654-cd7aac0538b6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.040913 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1db7f396-e065-4bc0-92fa-712ccc5071e5-node-bootstrap-token\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.040986 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-webhook-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.043404 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.043731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-proxy-tls\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.043932 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ce7d92-f8af-4edc-bbff-593f1c9329e1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.044098 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c799ff35-37f9-4fa7-b254-30b4936f6af8-signing-key\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.044522 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d69e9ea9-ae21-42ae-a6f6-845adda33d54-default-certificate\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.044708 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af0798d9-84b9-49ce-8485-8b6efe5fcd42-apiservice-cert\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.046089 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393cb85f-5c30-42a2-94da-65eddc249939-serving-cert\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.047867 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2caab735-0708-478f-83af-e87fe20f22b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.048815 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pf9\" (UniqueName: \"kubernetes.io/projected/853301cc-6aa1-4599-bf5a-58b9014da5da-kube-api-access-76pf9\") pod \"machine-config-controller-84d6567774-k84rv\" (UID: \"853301cc-6aa1-4599-bf5a-58b9014da5da\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.049743 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76dbc61e-486b-4a3f-9d1d-6e310fe49672-srv-cert\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.063000 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zn9s2\" (UniqueName: \"kubernetes.io/projected/9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33-kube-api-access-zn9s2\") pod \"dns-operator-744455d44c-rnqgw\" (UID: \"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.066149 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qhldv"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.078155 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-htb67" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.111218 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8p6\" (UniqueName: \"kubernetes.io/projected/20b4f849-7c8c-43f7-858a-1db881105d28-kube-api-access-8p8p6\") pod \"machine-api-operator-5694c8668f-248mp\" (UID: \"20b4f849-7c8c-43f7-858a-1db881105d28\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.113706 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.123029 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.123599 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.62358701 +0000 UTC m=+143.534670816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.128240 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.135529 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.136797 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.140114 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.141252 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m4grl"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.152454 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4z65\" (UniqueName: \"kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65\") pod \"oauth-openshift-558db77b4-9snxr\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.166750 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89rcw\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.189340 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-244cd\" (UniqueName: \"kubernetes.io/projected/2e40dad8-9b26-4337-9231-d7b952047695-kube-api-access-244cd\") pod \"multus-admission-controller-857f4d67dd-hzzx7\" (UID: \"2e40dad8-9b26-4337-9231-d7b952047695\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.220086 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.221579 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.223570 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4p8\" (UniqueName: \"kubernetes.io/projected/a5750534-5283-432b-8290-bc8215218d19-kube-api-access-7w4p8\") pod \"cluster-image-registry-operator-dc59b4c8b-7flnr\" (UID: \"a5750534-5283-432b-8290-bc8215218d19\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.224025 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.224128 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.724110948 +0000 UTC m=+143.635194754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.224403 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.224774 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.724767047 +0000 UTC m=+143.635850853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.227190 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8ww\" (UniqueName: \"kubernetes.io/projected/393cb85f-5c30-42a2-94da-65eddc249939-kube-api-access-kn8ww\") pod \"etcd-operator-b45778765-5g4zj\" (UID: \"393cb85f-5c30-42a2-94da-65eddc249939\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.245284 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89pn\" (UniqueName: \"kubernetes.io/projected/c4dcc1fd-04a7-466a-babe-ee39ca4a44e0-kube-api-access-b89pn\") pod \"ingress-canary-tbg6n\" (UID: \"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0\") " pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.267251 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/376b3099-2ce3-4c4c-b3c3-33cd426751aa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lhzvv\" (UID: \"376b3099-2ce3-4c4c-b3c3-33cd426751aa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.272501 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tbg6n" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.273150 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-htb67"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.285136 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gscfg\" (UniqueName: \"kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg\") pod \"marketplace-operator-79b997595-s5659\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.305958 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckjwn\" (UniqueName: \"kubernetes.io/projected/76dbc61e-486b-4a3f-9d1d-6e310fe49672-kube-api-access-ckjwn\") pod \"catalog-operator-68c6474976-tt6rv\" (UID: \"76dbc61e-486b-4a3f-9d1d-6e310fe49672\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: W1014 06:49:55.308421 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff50c24_42d7_4716_bf86_a6bc553086d6.slice/crio-88125cae5004e9725fd3c1ccfd09c55a71741a1572cd788198dadd86e3cee8c2 WatchSource:0}: Error finding container 88125cae5004e9725fd3c1ccfd09c55a71741a1572cd788198dadd86e3cee8c2: Status 404 returned error can't find the container with id 88125cae5004e9725fd3c1ccfd09c55a71741a1572cd788198dadd86e3cee8c2 Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.318536 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.324091 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5ngq\" (UniqueName: \"kubernetes.io/projected/1db7f396-e065-4bc0-92fa-712ccc5071e5-kube-api-access-q5ngq\") pod \"machine-config-server-qq72t\" (UID: \"1db7f396-e065-4bc0-92fa-712ccc5071e5\") " pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.327178 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.327818 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.827785486 +0000 UTC m=+143.738869292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.346364 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6d98\" (UniqueName: \"kubernetes.io/projected/267d0ed6-2fab-4d5f-a007-dbb9856b45de-kube-api-access-d6d98\") pod \"dns-default-j42gk\" (UID: \"267d0ed6-2fab-4d5f-a007-dbb9856b45de\") " pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.369017 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdzms\" (UniqueName: \"kubernetes.io/projected/0f3b2066-2ea9-421b-863d-defe1f13cf46-kube-api-access-zdzms\") pod \"csi-hostpathplugin-j925t\" (UID: \"0f3b2066-2ea9-421b-863d-defe1f13cf46\") " pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.371239 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.384628 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmr6\" (UniqueName: \"kubernetes.io/projected/f24d0800-742d-4261-a6bc-f29ff2559c67-kube-api-access-bkmr6\") pod \"service-ca-operator-777779d784-m8bnr\" (UID: \"f24d0800-742d-4261-a6bc-f29ff2559c67\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.390750 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.403704 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.409273 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.420310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2nd\" (UniqueName: \"kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd\") pod \"collect-profiles-29340405-tpnng\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.426458 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.428404 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.428690 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:55.928679264 +0000 UTC m=+143.839763070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.433679 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tfr\" (UniqueName: \"kubernetes.io/projected/af0798d9-84b9-49ce-8485-8b6efe5fcd42-kube-api-access-54tfr\") pod \"packageserver-d55dfcdfc-6chb4\" (UID: \"af0798d9-84b9-49ce-8485-8b6efe5fcd42\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.448709 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.459966 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6gvz\" (UniqueName: \"kubernetes.io/projected/bb740ddb-f795-4ed4-b94a-7bead0d054e7-kube-api-access-z6gvz\") pod \"migrator-59844c95c7-qmknz\" (UID: \"bb740ddb-f795-4ed4-b94a-7bead0d054e7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.470851 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25vb\" (UniqueName: \"kubernetes.io/projected/d69e9ea9-ae21-42ae-a6f6-845adda33d54-kube-api-access-t25vb\") pod \"router-default-5444994796-pkv6l\" (UID: \"d69e9ea9-ae21-42ae-a6f6-845adda33d54\") " pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.480070 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.485584 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.493366 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hs7p\" (UniqueName: \"kubernetes.io/projected/c799ff35-37f9-4fa7-b254-30b4936f6af8-kube-api-access-9hs7p\") pod \"service-ca-9c57cc56f-n4mdl\" (UID: \"c799ff35-37f9-4fa7-b254-30b4936f6af8\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.500300 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.506522 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.510815 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdwk\" (UniqueName: \"kubernetes.io/projected/5fb110f6-ed24-411b-a654-cd7aac0538b6-kube-api-access-drdwk\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7kwd\" (UID: \"5fb110f6-ed24-411b-a654-cd7aac0538b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.514094 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.520818 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.525819 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrv7m\" (UniqueName: \"kubernetes.io/projected/2caab735-0708-478f-83af-e87fe20f22b8-kube-api-access-xrv7m\") pod \"package-server-manager-789f6589d5-cd2lf\" (UID: \"2caab735-0708-478f-83af-e87fe20f22b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.530117 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.532626 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.032231518 +0000 UTC m=+143.943315324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.532702 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.532877 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.533114 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.033103424 +0000 UTC m=+143.944187230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.534424 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j42gk" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.542083 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.554921 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cflg\" (UniqueName: \"kubernetes.io/projected/7d7c2e7a-6236-48e2-89ee-929a27a1f39f-kube-api-access-6cflg\") pod \"machine-config-operator-74547568cd-92kpm\" (UID: \"7d7c2e7a-6236-48e2-89ee-929a27a1f39f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.561988 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j925t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.577685 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ce7d92-f8af-4edc-bbff-593f1c9329e1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hrvbw\" (UID: \"24ce7d92-f8af-4edc-bbff-593f1c9329e1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.579198 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qq72t" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.636997 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.637663 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.137646247 +0000 UTC m=+144.048730053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.643922 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnqgw"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.652767 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7d7q2"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.712950 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" event={"ID":"b3abe1bf-4556-435c-9aa1-3f9a997f3842","Type":"ContainerStarted","Data":"f62f9289e5b26ae11d2fde8aa5a8459ed8a3e8e513be07ed2f5aaf8fcea6c222"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.713021 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" event={"ID":"b3abe1bf-4556-435c-9aa1-3f9a997f3842","Type":"ContainerStarted","Data":"5de02cfffea61815b2ce515a1cbd21c3cd9a1e1a4a24f2d5c2a338039f91883c"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.713246 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.722332 5058 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qh79q container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.722375 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" podUID="b3abe1bf-4556-435c-9aa1-3f9a997f3842" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.723920 5058 generic.go:334] "Generic (PLEG): container finished" podID="fccdb01a-b512-49e4-9b8f-9a777aea21b9" 
containerID="60de6db0fc57046e5a685e00f85043fa79d97a9e1d35795a6838d0bd026b5e98" exitCode=0 Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.723976 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" event={"ID":"fccdb01a-b512-49e4-9b8f-9a777aea21b9","Type":"ContainerDied","Data":"60de6db0fc57046e5a685e00f85043fa79d97a9e1d35795a6838d0bd026b5e98"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.731624 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" event={"ID":"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9","Type":"ContainerStarted","Data":"c8f3cf1594e4ade6f4486de57b8db88d7d781ac639495dd9c2990c8d944e880c"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.732429 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.734283 5058 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-79mpw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.734320 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.734673 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qhldv" event={"ID":"60860626-d1e5-475c-a7aa-ac2a2f1e1593","Type":"ContainerStarted","Data":"cbf18a86b7aa07975819daa69d59efa7c9b0081cbb962c00839b2af0b8d87e8a"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.734720 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qhldv" event={"ID":"60860626-d1e5-475c-a7aa-ac2a2f1e1593","Type":"ContainerStarted","Data":"cae24581a94d22f19cc6296298793d3ef369f1c606f04e5f7e342de7dee529c7"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.735011 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qhldv" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.736523 5058 generic.go:334] "Generic (PLEG): container finished" podID="8be2923f-db3e-40bf-b821-27e0d8ab84ae" containerID="d36d69a1863153a2a2329caba2ed68fe351cea42dc3b29f0d2d2edff7353aa5b" exitCode=0 Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.736597 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" event={"ID":"8be2923f-db3e-40bf-b821-27e0d8ab84ae","Type":"ContainerDied","Data":"d36d69a1863153a2a2329caba2ed68fe351cea42dc3b29f0d2d2edff7353aa5b"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.738225 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.738499 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.238488603 +0000 UTC m=+144.149572409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.740092 5058 patch_prober.go:28] interesting pod/console-operator-58897d9998-qhldv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.740139 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qhldv" podUID="60860626-d1e5-475c-a7aa-ac2a2f1e1593" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.746078 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.749703 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" event={"ID":"6773ec1d-4c68-45d1-ac48-50a04ece6c27","Type":"ContainerStarted","Data":"505632849c0e9a95ea8bbd1a6c081c1557d1edbd81f07bc3c2bff6d0eef86809"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.749775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" event={"ID":"6773ec1d-4c68-45d1-ac48-50a04ece6c27","Type":"ContainerStarted","Data":"3c3b8ce4f8b674c1b474f04f6c988eba87ed077e15081be91a4fce163ab4ce59"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.753217 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.755369 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" event={"ID":"853301cc-6aa1-4599-bf5a-58b9014da5da","Type":"ContainerStarted","Data":"46a0f5a760c076733308847996caa6741b814615da61f29b97a2a4bec8cfd0fc"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.758663 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.764720 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.764738 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" event={"ID":"e09468c0-bfc1-42fe-918d-4068041806c0","Type":"ContainerStarted","Data":"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.765431 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.767445 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-htb67" event={"ID":"5ff50c24-42d7-4716-bf86-a6bc553086d6","Type":"ContainerStarted","Data":"33576b7a7db2663bce5d95399e3c36919247aa9074450ea690ba097dcdc836c9"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.767486 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-htb67" event={"ID":"5ff50c24-42d7-4716-bf86-a6bc553086d6","Type":"ContainerStarted","Data":"88125cae5004e9725fd3c1ccfd09c55a71741a1572cd788198dadd86e3cee8c2"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.768379 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-htb67" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.773470 5058 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mdp6d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.773515 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.773470 5058 patch_prober.go:28] interesting pod/downloads-7954f5f757-htb67 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.773565 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htb67" podUID="5ff50c24-42d7-4716-bf86-a6bc553086d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.773783 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.774956 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" event={"ID":"7b3707a5-688b-4f49-ba57-4159b9809f0c","Type":"ContainerStarted","Data":"97f01d180fa8b4c3606444f77f378bb78c9ec4c07cb492166542c7afb86a1ec2"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.774989 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" event={"ID":"7b3707a5-688b-4f49-ba57-4159b9809f0c","Type":"ContainerStarted","Data":"5dc29508c3930f7e3497e9106117700b03733ef0fe16e448c7896501c7f89844"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.776220 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" event={"ID":"92274d4b-dd07-43a9-b763-39052f342669","Type":"ContainerStarted","Data":"3c21dff026b224d92b4e8c1fdfd9ecab85b084a797779c34d2298e87e48139e8"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.776243 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" event={"ID":"92274d4b-dd07-43a9-b763-39052f342669","Type":"ContainerStarted","Data":"d27ee23547258c64137c53f97639cf16218e8a0270dd0a912cd624ec1825f2f5"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.777880 5058 generic.go:334] "Generic (PLEG): container finished" podID="cb29cc9b-9890-4f0d-a4ca-bfc93dce8628" containerID="623ce5644b21188eb10fbb63a37ed938034fe069d141127250dd4d1173bd0ff1" exitCode=0 Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.777935 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" event={"ID":"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628","Type":"ContainerDied","Data":"623ce5644b21188eb10fbb63a37ed938034fe069d141127250dd4d1173bd0ff1"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.777964 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" event={"ID":"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628","Type":"ContainerStarted","Data":"74632561c021b3667eb7cf260a535e8bc9eabd112847568745e0bd9e391eec3b"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.787016 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" event={"ID":"33215a89-847d-416d-9121-ad7a4e3ae01c","Type":"ContainerStarted","Data":"31d97035c6c3c9a096e244498474179afc0b65baef17ae2091e0b5093d122e53"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.787071 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" event={"ID":"33215a89-847d-416d-9121-ad7a4e3ae01c","Type":"ContainerStarted","Data":"eda0fef74165d84c043ffc84aa424d6414f6b0dddd351e4345e27401851556e0"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.795683 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.796030 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" event={"ID":"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc","Type":"ContainerStarted","Data":"dd904354a8c52b55eb57122cab107ca20fbf9dc09f1e785c9fb4c7e2c8d2a04e"} Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.849406 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.850075 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.350055399 +0000 UTC m=+144.261139205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.918595 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tbg6n"] Oct 14 06:49:55 crc kubenswrapper[5058]: I1014 06:49:55.953298 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:55 crc kubenswrapper[5058]: E1014 06:49:55.954485 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.454470798 +0000 UTC m=+144.365554604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.058510 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.058836 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.558820426 +0000 UTC m=+144.469904232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.074581 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-248mp"] Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.080987 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr"] Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.084070 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.159724 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.160341 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.660329702 +0000 UTC m=+144.571413508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.173987 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzzx7"] Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.185019 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9hl8v" podStartSLOduration=123.184996533 podStartE2EDuration="2m3.184996533s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.184521289 +0000 UTC m=+144.095605095" watchObservedRunningTime="2025-10-14 06:49:56.184996533 +0000 UTC m=+144.096080339" Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.259639 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qhldv" podStartSLOduration=123.259613343 podStartE2EDuration="2m3.259613343s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.258714687 +0000 UTC m=+144.169798493" watchObservedRunningTime="2025-10-14 06:49:56.259613343 +0000 UTC m=+144.170697149" Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.261431 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.262027 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.762006692 +0000 UTC m=+144.673090498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.397630 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.398829 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:56.898796775 +0000 UTC m=+144.809880581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.414361 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" podStartSLOduration=123.414346693 podStartE2EDuration="2m3.414346693s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.413783117 +0000 UTC m=+144.324866923" watchObservedRunningTime="2025-10-14 06:49:56.414346693 +0000 UTC m=+144.325430499" Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.512014 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.512137 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.012105641 +0000 UTC m=+144.923189447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.512436 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.512889 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.012875773 +0000 UTC m=+144.923959579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.526776 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2lt5" podStartSLOduration=123.526757923 podStartE2EDuration="2m3.526757923s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.525565519 +0000 UTC m=+144.436649335" watchObservedRunningTime="2025-10-14 06:49:56.526757923 +0000 UTC m=+144.437841729" Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.573280 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.616653 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.616917 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.116902561 +0000 UTC m=+145.027986367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.681526 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kxhxc" podStartSLOduration=123.681507873 podStartE2EDuration="2m3.681507873s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.626475377 +0000 UTC m=+144.537559183" watchObservedRunningTime="2025-10-14 06:49:56.681507873 +0000 UTC m=+144.592591679"
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.683282 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" podStartSLOduration=122.683271524 podStartE2EDuration="2m2.683271524s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.681217595 +0000 UTC m=+144.592301401" watchObservedRunningTime="2025-10-14 06:49:56.683271524 +0000 UTC m=+144.594355340"
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.734319 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.736768 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.236747216 +0000 UTC m=+145.147831042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.824468 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jl7h6" podStartSLOduration=122.824454353 podStartE2EDuration="2m2.824454353s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.781680881 +0000 UTC m=+144.692764687" watchObservedRunningTime="2025-10-14 06:49:56.824454353 +0000 UTC m=+144.735538159"
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.836393 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.836809 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.336779469 +0000 UTC m=+145.247863275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.903195 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fhbms" podStartSLOduration=123.903179412 podStartE2EDuration="2m3.903179412s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:56.901976548 +0000 UTC m=+144.813060354" watchObservedRunningTime="2025-10-14 06:49:56.903179412 +0000 UTC m=+144.814263218"
Oct 14 06:49:56 crc kubenswrapper[5058]: I1014 06:49:56.937377 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:56 crc kubenswrapper[5058]: E1014 06:49:56.937670 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.437642676 +0000 UTC m=+145.348726482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.054408 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.060252 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.560232309 +0000 UTC m=+145.471316115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
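Every mount and unmount failure in this window has the same root cause: the kubelet's registry of CSI drivers has no entry named kubevirt.io.hostpath-provisioner yet, so both the attacher and the unmounter fail before any RPC reaches a plugin. A minimal Go sketch of that lookup step, with illustrative names (driverRegistry, LookupDriver) rather than the kubelet's actual symbols:

package main

import (
	"fmt"
	"sync"
)

// driverRegistry mimics the kubelet's list of registered CSI drivers:
// drivers are added when their registration socket is discovered, and
// looked up by name on every mount/unmount operation.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint (socket path)
}

func (r *driverRegistry) LookupDriver(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// Mirrors the message repeated throughout the log above.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	r := &driverRegistry{drivers: map[string]string{}}

	// Before the hostpath plugin pod registers itself, lookups fail.
	if _, err := r.LookupDriver("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount attempt:", err)
	}

	// Once registration happens, the same lookup succeeds (socket path illustrative).
	r.mu.Lock()
	r.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
	r.mu.Unlock()

	if ep, err := r.LookupDriver("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("mount attempt: driver at", ep)
	}
}

Until something populates that registry, i.e. until the hostpath plugin pod comes up and registers, the same pair of errors repeats on every reconciler pass, as the rest of this window shows.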
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.064232 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.067220 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.56720564 +0000 UTC m=+145.478289446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098448 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" event={"ID":"7b3707a5-688b-4f49-ba57-4159b9809f0c","Type":"ContainerStarted","Data":"a02dd805dd3dff004eceace3b2d80886bd0f499e722678062bf84c478c23c1e0"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098493 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tbg6n" event={"ID":"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0","Type":"ContainerStarted","Data":"89fa4966987621d5626bd29bfcbf7d020bd955170b87215c15736e4421738661"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098503 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qq72t" event={"ID":"1db7f396-e065-4bc0-92fa-712ccc5071e5","Type":"ContainerStarted","Data":"d95c4bcdd95455c2823268645fe9a3b94e94e232d5ce12491d7556ee2a1b8b91"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qq72t" event={"ID":"1db7f396-e065-4bc0-92fa-712ccc5071e5","Type":"ContainerStarted","Data":"3409bf61edf2c8f07eb9648a95182135391a9615e128808f4344f04b5e928741"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098527 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" event={"ID":"21155744-ed89-42cc-b09d-cd8dd1896e6d","Type":"ContainerStarted","Data":"1d372d2df92dc30d8a9158b50037685972db1f11e470bf3b88916a7a2612b889"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098535 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" event={"ID":"5deb3681-f574-40e7-b0dd-3048d164de55","Type":"ContainerStarted","Data":"4062158c22a4c9e7040f654dc1ff311ab919f762809f362d76625620ea1d6e89"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098547 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" event={"ID":"853301cc-6aa1-4599-bf5a-58b9014da5da","Type":"ContainerStarted","Data":"b1916458650e34f45b2fbdb5aa1e7dd40b8e7f0ae3d368f4c6a215d74caf8fd5"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098559 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pkv6l" event={"ID":"d69e9ea9-ae21-42ae-a6f6-845adda33d54","Type":"ContainerStarted","Data":"0080564c6c984d781b2d0f64abc8957646d6f2dfc47daf5ca45450c7f68b2d7b"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.098576 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pkv6l" event={"ID":"d69e9ea9-ae21-42ae-a6f6-845adda33d54","Type":"ContainerStarted","Data":"88a5328069c880fcad01ed602c3564e80736df2227fbfa0db61a8c005bba16c7"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.099308 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q" podStartSLOduration=123.099294295 podStartE2EDuration="2m3.099294295s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:57.029503893 +0000 UTC m=+144.940587699" watchObservedRunningTime="2025-10-14 06:49:57.099294295 +0000 UTC m=+145.010378101"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.138690 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" event={"ID":"2e40dad8-9b26-4337-9231-d7b952047695","Type":"ContainerStarted","Data":"d06661c1e82f5745b6c212b841b28c5a9c5b8f30bcaaf66de4efa9a91f2ea427"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.164894 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.165342 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.665314028 +0000 UTC m=+145.576397824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
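The pod_startup_latency_tracker entries are plain arithmetic over the timestamps they print: podStartSLOduration is the gap between podCreationTimestamp and watchObservedRunningTime, and with the zero firstStartedPulling/lastFinishedPulling stamps (no image-pull time to subtract, since the images were already present) it coincides with podStartE2EDuration. A quick check against the olm-operator-6b444d44fb-qh79q entry above:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	// Timestamps copied from the olm-operator entry in the log.
	created, err := time.Parse(layout, "2025-10-14 06:47:54 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-10-14 06:49:57.099294295 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// watchObservedRunningTime - podCreationTimestamp reproduces both fields:
	// podStartSLOduration=123.099294295 and podStartE2EDuration="2m3.099294295s".
	d := observed.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%q\n", d.Seconds(), d)
}

The two-minute-plus figures across this window say these pods were created around 06:47:53-54 but only observed running around 06:49:56-59, consistent with the slow node start this journal documents.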
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.165458 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.166975 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.56720564 +0000 UTC m=+145.478289446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.200641 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" event={"ID":"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9","Type":"ContainerStarted","Data":"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.256900 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.268327 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.269488 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.76947272 +0000 UTC m=+145.680556526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.299449 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" event={"ID":"c36bc5ec-df54-4b0c-84c8-1b1d64eb91dc","Type":"ContainerStarted","Data":"9952a89f646c5b417aca58ef763c41b15495b5539b10599495645fe0c7210845"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.314352 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4mdl"]
Oct 14 06:49:57 crc kubenswrapper[5058]: W1014 06:49:57.347614 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc799ff35_37f9_4fa7_b254_30b4936f6af8.slice/crio-4c953dc8e5ccd2077e6cc21b841b1bf3015bc2d70fb6b9b1fbef22d2b263ca51 WatchSource:0}: Error finding container 4c953dc8e5ccd2077e6cc21b841b1bf3015bc2d70fb6b9b1fbef22d2b263ca51: Status 404 returned error can't find the container with id 4c953dc8e5ccd2077e6cc21b841b1bf3015bc2d70fb6b9b1fbef22d2b263ca51
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.375837 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.376996 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr"]
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.377278 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.877264597 +0000 UTC m=+145.788348403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.433339 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" event={"ID":"ee1b5ef4-b161-48dd-9e71-6217b04acf89","Type":"ContainerStarted","Data":"fab480c47137c5c487afd312ebdc457d23990f1ef8fb6f6f94c84345784a43ad"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.433612 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" event={"ID":"ee1b5ef4-b161-48dd-9e71-6217b04acf89","Type":"ContainerStarted","Data":"4c9b3f4eecb92c0b38c50b9ecb32c52e85b20898cb5156a84593bb8ca18e03fb"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.448489 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j925t"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.461493 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfggz" podStartSLOduration=123.461475624 podStartE2EDuration="2m3.461475624s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:57.450545479 +0000 UTC m=+145.361629295" watchObservedRunningTime="2025-10-14 06:49:57.461475624 +0000 UTC m=+145.372559430"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.473030 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" event={"ID":"20b4f849-7c8c-43f7-858a-1db881105d28","Type":"ContainerStarted","Data":"11788d33603c652a5ef522f9d671404556e75942f27350499d66b7504e68fd77"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.478350 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.479185 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:57.979169324 +0000 UTC m=+145.890253130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
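Each failed volume operation is parked by nestedpendingoperations.go, which refuses to re-run it before the printed deadline; the reconciler keeps asking anyway, which is why every attempt is answered with "No retries permitted until ..." half a second in the future. A schematic of that gate in Go; the 500ms initial delay matches the log, while the exact growth schedule on repeated failures is left illustrative:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp mirrors the idea behind nestedpendingoperations.go: a failed
// volume operation records when it may be retried; attempts before that
// deadline are refused with "No retries permitted until ...".
type pendingOp struct {
	notBefore time.Time
	backoff   time.Duration
}

var errDriverNotRegistered = errors.New(
	"driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

func (op *pendingOp) tryRun(now time.Time, run func() error) error {
	if now.Before(op.notBefore) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			op.notBefore.Format(time.RFC3339Nano), op.backoff)
	}
	if err := run(); err != nil {
		// Schedule the next attempt; the kubelet starts at 500ms and backs
		// off further on repeated failures (schedule here is illustrative).
		op.notBefore = now.Add(op.backoff)
		return err
	}
	return nil
}

func main() {
	op := &pendingOp{backoff: 500 * time.Millisecond}
	now := time.Now()

	// First attempt fails: the driver is not registered yet.
	fmt.Println(op.tryRun(now, func() error { return errDriverNotRegistered }))

	// An immediate retry is refused until the backoff window has passed.
	fmt.Println(op.tryRun(now.Add(100*time.Millisecond), func() error { return nil }))

	// After 500ms the operation is allowed through again.
	fmt.Println(op.tryRun(now.Add(600*time.Millisecond), func() error { return nil }))
}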
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.489117 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.546997 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5g4zj"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.548001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" event={"ID":"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33","Type":"ContainerStarted","Data":"102aabaa1f9cac44f5065490bdb458a65551e1421e9345270d4455654f58f7c0"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.548041 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" event={"ID":"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33","Type":"ContainerStarted","Data":"07c8ec6dc3ca23f46ff6755fb926b301520a2f95f6f4f07c585bfa845ff398a8"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.556870 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j42gk"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.574188 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" event={"ID":"8be2923f-db3e-40bf-b821-27e0d8ab84ae","Type":"ContainerStarted","Data":"ff0d5181b7d02805a466ca38553528dcab142b882bd387bf6098690bd9244adb"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.574973 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.582206 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.583385 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.083373367 +0000 UTC m=+145.994457173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.633073 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" event={"ID":"a5750534-5283-432b-8290-bc8215218d19","Type":"ContainerStarted","Data":"ffe856e2f559e363c402a6feb57120819b007b516bb05e8d4cd58bff781f9e85"}
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.635564 5058 patch_prober.go:28] interesting pod/downloads-7954f5f757-htb67 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.635592 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htb67" podUID="5ff50c24-42d7-4716-bf86-a6bc553086d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.641737 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qhldv"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.646838 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qh79q"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.653894 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.665772 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.682851 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.683233 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.183189964 +0000 UTC m=+146.094273770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.697866 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-htb67" podStartSLOduration=124.697846867 podStartE2EDuration="2m4.697846867s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:57.696097046 +0000 UTC m=+145.607180862" watchObservedRunningTime="2025-10-14 06:49:57.697846867 +0000 UTC m=+145.608930673"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.747508 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.755388 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pkv6l"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.767453 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 06:49:57 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld
Oct 14 06:49:57 crc kubenswrapper[5058]: [+]process-running ok
Oct 14 06:49:57 crc kubenswrapper[5058]: healthz check failed
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.767495 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.784576 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.804119 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.304099499 +0000 UTC m=+146.215183305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
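The probe traffic interleaved with the volume retries follows one pattern: the kubelet prober issues an HTTP GET, and either the TCP connect is refused (the container is not listening yet, as with downloads-7954f5f757-htb67) or the endpoint answers with a failing status plus a diagnostic body (the router's healthz listing [-]backend-http and [-]has-synced subchecks). A sketch of that check, assuming the usual "2xx-3xx is success" rule:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHTTP performs one HTTP-GET probe the way the failures above read:
// any transport error or a status outside 200-399 counts as failure, and
// the start of the response body is kept for the log message.
func probeHTTP(url string) (ok bool, detail string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, fmt.Sprintf("Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 256)) // start-of-body only
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return false, fmt.Sprintf("HTTP probe failed with statuscode: %d\n%s", resp.StatusCode, body)
	}
	return true, string(body)
}

func main() {
	// The downloads pod's probe endpoint from the log; outside the cluster
	// this simply reports a dial error, like the failing probes above.
	if ok, detail := probeHTTP("http://10.217.0.20:8080/"); !ok {
		fmt.Println("Probe failed:", detail)
	}
}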
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.887612 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.887992 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.387976067 +0000 UTC m=+146.299059873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.903343 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.924452 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw"]
Oct 14 06:49:57 crc kubenswrapper[5058]: W1014 06:49:57.953898 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf0798d9_84b9_49ce_8485_8b6efe5fcd42.slice/crio-3988a2eb68456cad647de866346dc7b2568e5265094d962b766f7b22fdb484f9 WatchSource:0}: Error finding container 3988a2eb68456cad647de866346dc7b2568e5265094d962b766f7b22fdb484f9: Status 404 returned error can't find the container with id 3988a2eb68456cad647de866346dc7b2568e5265094d962b766f7b22fdb484f9
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.962942 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.964810 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.974984 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz"]
Oct 14 06:49:57 crc kubenswrapper[5058]: I1014 06:49:57.989990 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:57 crc kubenswrapper[5058]: E1014 06:49:57.990401 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.490386618 +0000 UTC m=+146.401470424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.015962 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r7sc7" podStartSLOduration=124.015943225 podStartE2EDuration="2m4.015943225s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.002694253 +0000 UTC m=+145.913778059" watchObservedRunningTime="2025-10-14 06:49:58.015943225 +0000 UTC m=+145.927027031"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.038161 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd"]
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.057828 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qq72t" podStartSLOduration=6.057788271 podStartE2EDuration="6.057788271s" podCreationTimestamp="2025-10-14 06:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.043108488 +0000 UTC m=+145.954192294" watchObservedRunningTime="2025-10-14 06:49:58.057788271 +0000 UTC m=+145.968872077"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.092773 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.093134 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.593117249 +0000 UTC m=+146.504201055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.105914 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7d7q2" podStartSLOduration=124.105893408 podStartE2EDuration="2m4.105893408s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.102185131 +0000 UTC m=+146.013268947" watchObservedRunningTime="2025-10-14 06:49:58.105893408 +0000 UTC m=+146.016977214"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.190041 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" podStartSLOduration=124.190024963 podStartE2EDuration="2m4.190024963s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.189054185 +0000 UTC m=+146.100137991" watchObservedRunningTime="2025-10-14 06:49:58.190024963 +0000 UTC m=+146.101108769"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.190380 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" podStartSLOduration=124.190375743 podStartE2EDuration="2m4.190375743s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.14240445 +0000 UTC m=+146.053488256" watchObservedRunningTime="2025-10-14 06:49:58.190375743 +0000 UTC m=+146.101459549"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.195013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.195332 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.695320105 +0000 UTC m=+146.606403911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.226305 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pkv6l" podStartSLOduration=124.226291088 podStartE2EDuration="2m4.226291088s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.225167025 +0000 UTC m=+146.136250841" watchObservedRunningTime="2025-10-14 06:49:58.226291088 +0000 UTC m=+146.137374894"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.281772 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xsms7" podStartSLOduration=124.281757516 podStartE2EDuration="2m4.281757516s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.281730056 +0000 UTC m=+146.192813862" watchObservedRunningTime="2025-10-14 06:49:58.281757516 +0000 UTC m=+146.192841322"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.296918 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.297459 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.797438958 +0000 UTC m=+146.708522764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
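At this density the journal is easier to read after aggregation; dozens of the lines above are the same two operations failing for the same volume. A small stdin filter makes that obvious (the file name triage.go and the journalctl pipeline are hypothetical usage, not from this log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Reads journal text on stdin and tallies kubelet volume-operation failures
// per volume, so a retry storm like the one above collapses into one line.
// Usage (hypothetical): journalctl -u kubelet | go run triage.go
func main() {
	re := regexp.MustCompile(`Error: (MountVolume\.\w+|UnmountVolume\.\w+) failed for volume "([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d  %s\n", n, k)
	}
}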
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.313715 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" podStartSLOduration=125.313697537 podStartE2EDuration="2m5.313697537s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:58.313302076 +0000 UTC m=+146.224385882" watchObservedRunningTime="2025-10-14 06:49:58.313697537 +0000 UTC m=+146.224781333"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.402526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.402885 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:58.902873357 +0000 UTC m=+146.813957163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.506776 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.507823 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.00778682 +0000 UTC m=+146.918870626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.618167 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.618589 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.118571393 +0000 UTC m=+147.029655199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.679881 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7flnr" event={"ID":"a5750534-5283-432b-8290-bc8215218d19","Type":"ContainerStarted","Data":"d1655bc01b7c9576b7b069af8dc20aec11c5cb274a02cf0aab0bfac3bc1d6e7b"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.719325 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.719680 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.219665317 +0000 UTC m=+147.130749123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.725355 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" event={"ID":"376b3099-2ce3-4c4c-b3c3-33cd426751aa","Type":"ContainerStarted","Data":"a596fcbeeda5695f1b1928b2eba969d1313de526cd50112547f526e6ec8a3278"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.739057 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" event={"ID":"393cb85f-5c30-42a2-94da-65eddc249939","Type":"ContainerStarted","Data":"5530a30b3413770cd52a5ec3f86d801e7cd89ef858ff1b66c52632620ba4323c"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.758996 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 06:49:58 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld
Oct 14 06:49:58 crc kubenswrapper[5058]: [+]process-running ok
Oct 14 06:49:58 crc kubenswrapper[5058]: healthz check failed
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.759046 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.763561 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" event={"ID":"5deb3681-f574-40e7-b0dd-3048d164de55","Type":"ContainerStarted","Data":"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.764499 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.770348 5058 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9snxr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body=
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.770411 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.786741 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j925t" event={"ID":"0f3b2066-2ea9-421b-863d-defe1f13cf46","Type":"ContainerStarted","Data":"082f7d8b9f17312ced0538b60e9d72f22d9ce86230b17f57d36664f2b52dcb98"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.821232 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.822969 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.322958524 +0000 UTC m=+147.234042330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.868454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" event={"ID":"bb740ddb-f795-4ed4-b94a-7bead0d054e7","Type":"ContainerStarted","Data":"a24a420176982f1d86a0fb03629eb3fe72545daa59c9382bd34b60004e7e3024"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.883463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" event={"ID":"2e40dad8-9b26-4337-9231-d7b952047695","Type":"ContainerStarted","Data":"0b947c0ab738cc51e6922ebb00660e22440c38bc953c537ff4bb28d1b8ab33b4"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.927260 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:58 crc kubenswrapper[5058]: E1014 06:49:58.928169 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.428153386 +0000 UTC m=+147.339237192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
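The ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-j925t just above is the turning point: once that plugin announces itself over the kubelet's plugin-registration socket, kubevirt.io.hostpath-provisioner joins the registry and the pvc-657094db-... mount can finally proceed. A polling sketch of that discovery step; the directory is the conventional path (not read from this log), and the real kubelet uses inotify plus a gRPC GetInfo call rather than filenames, so treat this as illustration only:

package main

import (
	"fmt"
	"os"
	"time"
)

// Conventional kubelet plugin-registration directory; an assumption here,
// not a path taken from this log.
const pluginDir = "/var/lib/kubelet/plugins_registry"

func main() {
	seen := map[string]bool{}
	for i := 0; i < 5; i++ { // a few polling rounds stand in for inotify
		entries, err := os.ReadDir(pluginDir)
		if err != nil {
			fmt.Println("no plugin registry here:", err)
			return
		}
		for _, e := range entries {
			if !seen[e.Name()] {
				seen[e.Name()] = true
				// The real kubelet dials this socket and asks the plugin for
				// its name via a GetInfo RPC; the filename is just a hint.
				fmt.Println("new registration socket:", e.Name())
			}
		}
		time.Sleep(time.Second)
	}
}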
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.929067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" event={"ID":"853301cc-6aa1-4599-bf5a-58b9014da5da","Type":"ContainerStarted","Data":"f73acca9c27a2a93902836c5bb32125f58f84c9013c8dd1b67938580d8263829"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.942868 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" event={"ID":"9f5ba2dc-b2b8-4727-a75a-3bbb9d982a33","Type":"ContainerStarted","Data":"4b9a3ed07f08fe9d88c69995197aebe792dfe0d72251089b907a9495395c3759"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.950449 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" event={"ID":"2caab735-0708-478f-83af-e87fe20f22b8","Type":"ContainerStarted","Data":"1a69cff12a098415bd4116e6ab8fdee33e0a0327b405218b99f94c23adc799ad"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.973647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" event={"ID":"21155744-ed89-42cc-b09d-cd8dd1896e6d","Type":"ContainerStarted","Data":"3bcc1f8873dcead744dc617d7f831f70b9bcc752ad19d7e4a49ec5695d5fc584"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.974646 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s5659"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.976743 5058 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-s5659 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.976773 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.990025 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" event={"ID":"20b4f849-7c8c-43f7-858a-1db881105d28","Type":"ContainerStarted","Data":"b1ddf5c210cf797548f427a16e0568906d364e4878794ce81120062fc94fbd6e"}
Oct 14 06:49:58 crc kubenswrapper[5058]: I1014 06:49:58.990061 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" event={"ID":"20b4f849-7c8c-43f7-858a-1db881105d28","Type":"ContainerStarted","Data":"daffea7635c1a6b92d95243bbd894393b8e3a2565538a3b80a72c0ff2bc03e8a"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.004287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" event={"ID":"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057","Type":"ContainerStarted","Data":"7be7bdee3f79c3c2070935c0f70b7595231101391214a31adfee54b5118d8568"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.004330 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" event={"ID":"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057","Type":"ContainerStarted","Data":"6a11b5c72756237174adafaab2e4c5d08dae7d49c44bfd98f5b84a8194fa5161"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.012870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" event={"ID":"76dbc61e-486b-4a3f-9d1d-6e310fe49672","Type":"ContainerStarted","Data":"029b76b6963cc0daca3063902f85f490d91bf2aafc785fce78d6b3cbccc5eb6b"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.012909 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" event={"ID":"76dbc61e-486b-4a3f-9d1d-6e310fe49672","Type":"ContainerStarted","Data":"74065ee5a10099a957fa6d462712d1e11f327ffd3ce3a34685a84d100b2d4958"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.013536 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv"
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.020703 5058 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tt6rv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.020773 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" podUID="76dbc61e-486b-4a3f-9d1d-6e310fe49672" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.028692 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.029607 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.52958467 +0000 UTC m=+147.440668476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.031180 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" event={"ID":"5fb110f6-ed24-411b-a654-cd7aac0538b6","Type":"ContainerStarted","Data":"6130802f4c32a221a6de0dd1ff7ae3fb3b3a3060716c56376115cac53d70268e"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.046007 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" event={"ID":"24ce7d92-f8af-4edc-bbff-593f1c9329e1","Type":"ContainerStarted","Data":"d4a3337b05662e845f697f38ffd9d68c7d096d52e4cf24cb08fb30a810515e6d"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.055163 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.055403 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6"
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.083874 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tbg6n" event={"ID":"c4dcc1fd-04a7-466a-babe-ee39ca4a44e0","Type":"ContainerStarted","Data":"6ff6c462f8f3f3f564ec4401ee8bce14a4cfbd46c24e46581f9de2ff3c328ded"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.094600 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" podStartSLOduration=125.094584573 podStartE2EDuration="2m5.094584573s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.092497123 +0000 UTC m=+147.003580929" watchObservedRunningTime="2025-10-14 06:49:59.094584573 +0000 UTC m=+147.005668379"
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.114345 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" event={"ID":"7d7c2e7a-6236-48e2-89ee-929a27a1f39f","Type":"ContainerStarted","Data":"1763ea0abb587373d61d4d4035fb7825b732f030c52be2360a82e6d5a404f042"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.127033 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" event={"ID":"f24d0800-742d-4261-a6bc-f29ff2559c67","Type":"ContainerStarted","Data":"c8dfac6467428dbbd9e1a9bfafd81a66817a6c012de1393e6787d159e1fcde09"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.127332 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" event={"ID":"f24d0800-742d-4261-a6bc-f29ff2559c67","Type":"ContainerStarted","Data":"f1af772d0309f8ef477817db29e99591d906beddbfe721fab9d4459af6f590a0"}
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.129214 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.129547 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.62952283 +0000 UTC m=+147.540606696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.129697 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.131417 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.631404594 +0000 UTC m=+147.542488400 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.139722 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-248mp" podStartSLOduration=125.139704264 podStartE2EDuration="2m5.139704264s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.135255155 +0000 UTC m=+147.046338961" watchObservedRunningTime="2025-10-14 06:49:59.139704264 +0000 UTC m=+147.050788070" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.146040 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" event={"ID":"c799ff35-37f9-4fa7-b254-30b4936f6af8","Type":"ContainerStarted","Data":"a2714de6fe61dc974b0b735aa0384ce1599edba680ffc58dcf88c3e6b49867ef"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.146236 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" event={"ID":"c799ff35-37f9-4fa7-b254-30b4936f6af8","Type":"ContainerStarted","Data":"4c953dc8e5ccd2077e6cc21b841b1bf3015bc2d70fb6b9b1fbef22d2b263ca51"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.176416 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k84rv" podStartSLOduration=125.176400241 podStartE2EDuration="2m5.176400241s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.17496926 +0000 UTC m=+147.086053066" watchObservedRunningTime="2025-10-14 06:49:59.176400241 +0000 UTC m=+147.087484047" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.187847 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j42gk" event={"ID":"267d0ed6-2fab-4d5f-a007-dbb9856b45de","Type":"ContainerStarted","Data":"3acb57841f6845d9fc8e774ca11ad9ef2da1b1f1bae1a2bedd000119ba308701"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.200182 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" event={"ID":"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628","Type":"ContainerStarted","Data":"3cf3b924d6f7dda6a47787f0f07ce6bb7ac4d47d2313533745d9a2a51ea7f80b"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.200230 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" event={"ID":"cb29cc9b-9890-4f0d-a4ca-bfc93dce8628","Type":"ContainerStarted","Data":"75ef7374c7b8abf6d78c5157ad035fe003732baa1902df09201f9891193aeae8"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.240237 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.240301 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" event={"ID":"fccdb01a-b512-49e4-9b8f-9a777aea21b9","Type":"ContainerStarted","Data":"42c202261545e65f8cb9c5b329f03619f7d60fd87cf5509a2717172dbcf2006c"} Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.241241 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.741214389 +0000 UTC m=+147.652298195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.250326 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" podStartSLOduration=125.250308441 podStartE2EDuration="2m5.250308441s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.247262494 +0000 UTC m=+147.158346320" watchObservedRunningTime="2025-10-14 06:49:59.250308441 +0000 UTC m=+147.161392247" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.266081 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" event={"ID":"af0798d9-84b9-49ce-8485-8b6efe5fcd42","Type":"ContainerStarted","Data":"3988a2eb68456cad647de866346dc7b2568e5265094d962b766f7b22fdb484f9"} Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.287704 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.287843 5058 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6chb4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.287923 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" podUID="af0798d9-84b9-49ce-8485-8b6efe5fcd42" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.312623 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" podStartSLOduration=125.312600047 podStartE2EDuration="2m5.312600047s" 
podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.308316673 +0000 UTC m=+147.219400489" watchObservedRunningTime="2025-10-14 06:49:59.312600047 +0000 UTC m=+147.223683853" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.341338 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rnqgw" podStartSLOduration=126.341321765 podStartE2EDuration="2m6.341321765s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.340886722 +0000 UTC m=+147.251970538" watchObservedRunningTime="2025-10-14 06:49:59.341321765 +0000 UTC m=+147.252405571" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.344591 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.355174 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.855156794 +0000 UTC m=+147.766240600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.362223 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.404189 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" podStartSLOduration=125.404172546 podStartE2EDuration="2m5.404172546s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.403387024 +0000 UTC m=+147.314470830" watchObservedRunningTime="2025-10-14 06:49:59.404172546 +0000 UTC m=+147.315256352" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.451902 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.452264 5058 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:49:59.952248772 +0000 UTC m=+147.863332578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.555171 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.555776 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.055765265 +0000 UTC m=+147.966849071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.586403 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n4mdl" podStartSLOduration=125.586388758 podStartE2EDuration="2m5.586388758s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.585368829 +0000 UTC m=+147.496452635" watchObservedRunningTime="2025-10-14 06:49:59.586388758 +0000 UTC m=+147.497472554" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.586574 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" podStartSLOduration=125.586570763 podStartE2EDuration="2m5.586570763s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.456183775 +0000 UTC m=+147.367267571" watchObservedRunningTime="2025-10-14 06:49:59.586570763 +0000 UTC m=+147.497654569" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.647773 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" podStartSLOduration=125.647758737 podStartE2EDuration="2m5.647758737s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.619877113 +0000 UTC m=+147.530960919" watchObservedRunningTime="2025-10-14 06:49:59.647758737 +0000 UTC m=+147.558842533" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.648875 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" podStartSLOduration=125.648868009 podStartE2EDuration="2m5.648868009s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.646092949 +0000 UTC m=+147.557176765" watchObservedRunningTime="2025-10-14 06:49:59.648868009 +0000 UTC m=+147.559951815" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.656244 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.656529 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.156512069 +0000 UTC m=+148.067595875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.743297 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" podStartSLOduration=125.74327874 podStartE2EDuration="2m5.74327874s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.715664244 +0000 UTC m=+147.626748060" watchObservedRunningTime="2025-10-14 06:49:59.74327874 +0000 UTC m=+147.654362536" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.760446 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.760853 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.260838266 +0000 UTC m=+148.171922072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.765060 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 06:49:59 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld Oct 14 06:49:59 crc kubenswrapper[5058]: [+]process-running ok Oct 14 06:49:59 crc kubenswrapper[5058]: healthz check failed Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.765101 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.807582 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-m8bnr" podStartSLOduration=125.807565843 podStartE2EDuration="2m5.807565843s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.806981656 +0000 UTC m=+147.718065462" watchObservedRunningTime="2025-10-14 06:49:59.807565843 +0000 UTC m=+147.718649649" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.808880 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tbg6n" podStartSLOduration=7.808874261 podStartE2EDuration="7.808874261s" podCreationTimestamp="2025-10-14 06:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:49:59.747221154 +0000 UTC m=+147.658304960" watchObservedRunningTime="2025-10-14 06:49:59.808874261 +0000 UTC m=+147.719958067" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.861340 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.861506 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.361480747 +0000 UTC m=+148.272564553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.861590 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.861941 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.361934 +0000 UTC m=+148.273017806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.962923 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.963108 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.463080815 +0000 UTC m=+148.374164621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.963545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:49:59 crc kubenswrapper[5058]: E1014 06:49:59.963862 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.463854508 +0000 UTC m=+148.374938314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.972289 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:49:59 crc kubenswrapper[5058]: I1014 06:49:59.972347 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.064367 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.064556 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.564530369 +0000 UTC m=+148.475614165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.064635 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.064963 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.564951482 +0000 UTC m=+148.476035288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.147285 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-92kq5" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.165548 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.165732 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.665705036 +0000 UTC m=+148.576788842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.165816 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.166133 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.666125648 +0000 UTC m=+148.577209444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.266827 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.267065 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.767041146 +0000 UTC m=+148.678124952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.267127 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.267401 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.767394476 +0000 UTC m=+148.678478282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.287963 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" event={"ID":"7d7c2e7a-6236-48e2-89ee-929a27a1f39f","Type":"ContainerStarted","Data":"d4368f145bd0b381758683f852369ec2c7af747fb0bf03fe8ac820a77d82027a"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.288011 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" event={"ID":"7d7c2e7a-6236-48e2-89ee-929a27a1f39f","Type":"ContainerStarted","Data":"a862cf54b3dcd71cb75c32479a48045da7ef8cf04b64dadf9a18c57352a341ef"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.289930 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j925t" event={"ID":"0f3b2066-2ea9-421b-863d-defe1f13cf46","Type":"ContainerStarted","Data":"c2b14837711aaccf239b866691ba69b5fb700b7864f07d27fa6e65f2d7a82025"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.308268 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j42gk" event={"ID":"267d0ed6-2fab-4d5f-a007-dbb9856b45de","Type":"ContainerStarted","Data":"fec7d65b3f95564cade27bea7c3925855402c2fa215152bf4a7d1a89b43bf54f"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.308316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j42gk" event={"ID":"267d0ed6-2fab-4d5f-a007-dbb9856b45de","Type":"ContainerStarted","Data":"3fd7f5180b1e89ad97abd614cff18248381044119b10ab9ce1b27b000e615541"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.308385 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-j42gk" Oct 14 06:50:00 crc 
kubenswrapper[5058]: I1014 06:50:00.316194 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" event={"ID":"2caab735-0708-478f-83af-e87fe20f22b8","Type":"ContainerStarted","Data":"32be0240c301e6dd628173df1a5c9487fa75ef14dcb826b63dc78df6b24d843d"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.316246 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" event={"ID":"2caab735-0708-478f-83af-e87fe20f22b8","Type":"ContainerStarted","Data":"f67ddce6443610d18b242a3c2ed0aeeb649e7d613d243da31adc51eb2f951618"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.316300 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.319971 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-92kpm" podStartSLOduration=126.319957591 podStartE2EDuration="2m6.319957591s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.312778324 +0000 UTC m=+148.223862130" watchObservedRunningTime="2025-10-14 06:50:00.319957591 +0000 UTC m=+148.231041397" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.328065 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" event={"ID":"bb740ddb-f795-4ed4-b94a-7bead0d054e7","Type":"ContainerStarted","Data":"f6f6cca460e7adf94a2c9709be8e4970f8b5920e59d5ebe8d64e7d69751903bf"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.328109 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" event={"ID":"bb740ddb-f795-4ed4-b94a-7bead0d054e7","Type":"ContainerStarted","Data":"a3b826b6f83e1b935b1db00497ce7250c26eaef02de3333daa26c986e70d1d56"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.330533 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" event={"ID":"376b3099-2ce3-4c4c-b3c3-33cd426751aa","Type":"ContainerStarted","Data":"45a706a780ea99c8e2dc509f5fdc00f7d9f0671a74d7fe83645e7feceb50ee74"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.332526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7kwd" event={"ID":"5fb110f6-ed24-411b-a654-cd7aac0538b6","Type":"ContainerStarted","Data":"6b701feacc908d2d783dcf56c01c6b0efdee9f49fc046091be83768410d351c9"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.334566 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzzx7" event={"ID":"2e40dad8-9b26-4337-9231-d7b952047695","Type":"ContainerStarted","Data":"c54e6adadfd7b76a5fc474a27606df4563ccebc5bccde348f9d5f00e3f569d6c"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.339608 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" 
event={"ID":"24ce7d92-f8af-4edc-bbff-593f1c9329e1","Type":"ContainerStarted","Data":"ddec2f6847398567e129469940681499306549a923058ca473cdbb028db5f8c8"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.341417 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" event={"ID":"af0798d9-84b9-49ce-8485-8b6efe5fcd42","Type":"ContainerStarted","Data":"2b6d8883bfd4b7179ff2ba66844d9c031211ab3d4977e3e1d8e949ccce211301"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.342073 5058 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6chb4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.342104 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4" podUID="af0798d9-84b9-49ce-8485-8b6efe5fcd42" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.344521 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" event={"ID":"393cb85f-5c30-42a2-94da-65eddc249939","Type":"ContainerStarted","Data":"0db4e97a5e78e99ca38a52d93d228b95406cf27e67d459bc0d3adcb9d5d5f345"} Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.349715 5058 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-s5659 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.349752 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.359357 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.362181 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w55c6" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.363026 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tt6rv" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.364978 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-j42gk" podStartSLOduration=8.364964089 podStartE2EDuration="8.364964089s" podCreationTimestamp="2025-10-14 06:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.350318806 +0000 UTC m=+148.261402622" watchObservedRunningTime="2025-10-14 06:50:00.364964089 +0000 UTC m=+148.276047895" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 
06:50:00.392711 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.392823 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.892808421 +0000 UTC m=+148.803892227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.434248 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.440078 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:00.940062373 +0000 UTC m=+148.851146179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.480373 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lhzvv" podStartSLOduration=126.480357195 podStartE2EDuration="2m6.480357195s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.417647107 +0000 UTC m=+148.328730923" watchObservedRunningTime="2025-10-14 06:50:00.480357195 +0000 UTC m=+148.391441001" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.480766 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5g4zj" podStartSLOduration=126.480763446 podStartE2EDuration="2m6.480763446s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.479666255 +0000 UTC m=+148.390750061" watchObservedRunningTime="2025-10-14 06:50:00.480763446 +0000 UTC m=+148.391847252" Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.536473 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.536930 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.036903974 +0000 UTC m=+148.947987780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.537055 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.537564 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.037552213 +0000 UTC m=+148.948636019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.566223 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" podStartSLOduration=126.566207409 podStartE2EDuration="2m6.566207409s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.566099076 +0000 UTC m=+148.477182892" watchObservedRunningTime="2025-10-14 06:50:00.566207409 +0000 UTC m=+148.477291215"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.566915 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hrvbw" podStartSLOduration=126.566911509 podStartE2EDuration="2m6.566911509s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.539983113 +0000 UTC m=+148.451066919" watchObservedRunningTime="2025-10-14 06:50:00.566911509 +0000 UTC m=+148.477995315"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.599945 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qmknz" podStartSLOduration=126.599929771 podStartE2EDuration="2m6.599929771s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:00.599178479 +0000 UTC m=+148.510262285" watchObservedRunningTime="2025-10-14 06:50:00.599929771 +0000 UTC m=+148.511013567"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.638990 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.639145 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.139128661 +0000 UTC m=+149.050212467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.639334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.639632 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.139625825 +0000 UTC m=+149.050709631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.740587 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.740699 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.240684708 +0000 UTC m=+149.151768514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.741041 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.741092 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.741125 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.741156 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.741557 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.241542643 +0000 UTC m=+149.152626449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.741762 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.746551 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.749449 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.750066 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.763518 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 06:50:00 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld
Oct 14 06:50:00 crc kubenswrapper[5058]: [+]process-running ok
Oct 14 06:50:00 crc kubenswrapper[5058]: healthz check failed
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.763588 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.773835 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.843272 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.843473 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.34344671 +0000 UTC m=+149.254530516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.843581 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.843858 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.343846251 +0000 UTC m=+149.254930057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.913866 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:50:00 crc kubenswrapper[5058]: I1014 06:50:00.944822 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:00 crc kubenswrapper[5058]: E1014 06:50:00.945178 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.445162981 +0000 UTC m=+149.356246787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.012958 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.043127 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.045622 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.045954 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.545941226 +0000 UTC m=+149.457025032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.147666 5058 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m4grl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]log ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]etcd ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/max-in-flight-filter ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Oct 14 06:50:01 crc kubenswrapper[5058]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 06:50:01 crc kubenswrapper[5058]: livez check failed
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.147722 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" podUID="cb29cc9b-9890-4f0d-a4ca-bfc93dce8628" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.148024 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.148707 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.648692158 +0000 UTC m=+149.559775964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.249896 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.250197 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.750184363 +0000 UTC m=+149.661268169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.352305 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.352567 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.852552943 +0000 UTC m=+149.763636749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.362882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"53da75fc810233601649f341d0545a93f00e6ffb3a9c9374431acb0a1c0fae87"}
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.376955 5058 generic.go:334] "Generic (PLEG): container finished" podID="0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" containerID="7be7bdee3f79c3c2070935c0f70b7595231101391214a31adfee54b5118d8568" exitCode=0
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.377091 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" event={"ID":"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057","Type":"ContainerDied","Data":"7be7bdee3f79c3c2070935c0f70b7595231101391214a31adfee54b5118d8568"}
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.383084 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j925t" event={"ID":"0f3b2066-2ea9-421b-863d-defe1f13cf46","Type":"ContainerStarted","Data":"b26f3f408a7fcbddc8a412459f831e403cb2a886ffd57063b4ddc7775c8fd548"}
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.384181 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j925t" event={"ID":"0f3b2066-2ea9-421b-863d-defe1f13cf46","Type":"ContainerStarted","Data":"19bfdfc4a504497ffb2aa642124bfb9c68f5892129cf322ea6ce59843e7bf70d"}
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.393567 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s5659"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.454025 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.459592 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:01.959577498 +0000 UTC m=+149.870661294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.542118 5058 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.554964 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.555266 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.055222865 +0000 UTC m=+149.966306671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.555544 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.555874 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.055862063 +0000 UTC m=+149.966945869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
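Note: every mount and unmount failure above has the same root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with this kubelet, and each failed operation is parked by nestedpendingoperations for the fixed 500ms backoff shown as durationBeforeRetry. A minimal Go sketch of that retry gate, under simplifying assumptions (pendingOps, opState, and tryRun are illustrative names, not kubelet source; real kubelet backoff grows exponentially, while a fixed 500ms is used here to match the entries above):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // opState records the earliest time a failed volume operation may run
    // again, mirroring the "No retries permitted until ..." messages above.
    type opState struct {
    	notBefore time.Time
    }

    type pendingOps struct {
    	ops map[string]*opState // keyed by volume name
    }

    // tryRun refuses to start an operation still inside its backoff window;
    // on failure it re-arms the window (fixed 500ms here, as in the log).
    func (p *pendingOps) tryRun(volume string, op func() error, backoff time.Duration) error {
    	if st, ok := p.ops[volume]; ok && time.Now().Before(st.notBefore) {
    		return fmt.Errorf("operation for %q: no retries permitted until %s", volume, st.notBefore.Format(time.RFC3339Nano))
    	}
    	if err := op(); err != nil {
    		p.ops[volume] = &opState{notBefore: time.Now().Add(backoff)}
    		return err
    	}
    	delete(p.ops, volume)
    	return nil
    }

    func main() {
    	p := &pendingOps{ops: map[string]*opState{}}
    	registered := false // flips to true once the driver registers
    	mount := func() error {
    		if !registered {
    			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    		}
    		return nil
    	}
    	for i := 0; i < 3; i++ {
    		if err := p.tryRun("pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", mount, 500*time.Millisecond); err != nil {
    			fmt.Println("E:", err)
    		}
    		time.Sleep(600 * time.Millisecond)
    		registered = i >= 1 // driver comes up partway through, as at 06:50:01.972360 below
    	}
    	fmt.Println("mount succeeded once the driver registered")
    }

This is why the identical error repeats at roughly 100ms reconciler intervals but each individual volume operation is only re-attempted twice per second: the reconciler keeps asking, and the pending-operations gate keeps refusing until the deadline passes.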
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.656892 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.657086 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.15705829 +0000 UTC m=+150.068142096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.657194 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.657557 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.157539864 +0000 UTC m=+150.068623670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.662958 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6chb4"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.756971 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 06:50:01 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld
Oct 14 06:50:01 crc kubenswrapper[5058]: [+]process-running ok
Oct 14 06:50:01 crc kubenswrapper[5058]: healthz check failed
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.757306 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.757665 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.757844 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.257826614 +0000 UTC m=+150.168910420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.757915 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.758371 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.25835427 +0000 UTC m=+150.169438076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.858624 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.858845 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.358818915 +0000 UTC m=+150.269902721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.858898 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.859316 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.359299889 +0000 UTC m=+150.270383775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.863249 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"]
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.864361 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.865913 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.882821 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"]
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.960222 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.960368 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.460347192 +0000 UTC m=+150.371430998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.960474 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jlhm\" (UniqueName: \"kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.960557 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.960602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.960697 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:01 crc kubenswrapper[5058]: E1014 06:50:01.961089 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 06:50:02.461064292 +0000 UTC m=+150.372148098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-khhmg" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.972360 5058 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-14T06:50:01.542468417Z","Handler":null,"Name":""}
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.976325 5058 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Oct 14 06:50:01 crc kubenswrapper[5058]: I1014 06:50:01.976368 5058 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.055911 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mq7b7"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.056747 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.059216 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061349 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061502 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061549 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jlhm\" (UniqueName: \"kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061603 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061634 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8dx\" (UniqueName: \"kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061663 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.061876 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.062163 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.062411 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.066175 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.076147 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq7b7"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.089888 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jlhm\" (UniqueName: \"kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm\") pod \"certified-operators-mxl4f\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.163629 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.163686 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8dx\" (UniqueName: \"kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.163710 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.163814 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.164290 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.164381 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.169630 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.169673 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.178671 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxl4f"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.186391 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8dx\" (UniqueName: \"kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx\") pod \"community-operators-mq7b7\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.235783 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-khhmg\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") " pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.251207 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.252082 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.299566 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.307626 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
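Note: the turnaround begins at 06:50:01.542118, when the plugin watcher sees kubevirt.io.hostpath-provisioner-reg.sock appear under /var/lib/kubelet/plugins_registry; the driver is then validated and registered (csi_plugin.go:100/113 above), MountDevice becomes a no-op because the driver does not advertise STAGE_UNSTAGE_VOLUME, and MountVolume.SetUp for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 finally succeeds at 06:50:02.235783. A minimal Go sketch of the wait-for-socket step, assuming simple polling rather than the kubelet's filesystem-notification watcher (waitForSocket is an illustrative helper; the path is copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CSI driver's registration socket exists,
    // standing in for the kubelet plugin watcher's filesystem notifications.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil // socket present: registration can proceed
    		}
    		time.Sleep(100 * time.Millisecond)
    	}
    	return fmt.Errorf("registration socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
    	if err := waitForSocket(sock, 30*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("driver registration socket present; kubelet can now build a CSI client")
    }

From this point on, the remaining "not found in the list of registered CSI drivers" errors disappear from the log, and the pending teardown for pod 8f668bae-612b-4b75-9490-919e737c6a3b also completes (UnmountVolume.TearDown succeeded at 06:50:02.066175 above).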
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.368047 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.368144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.368177 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8wn\" (UniqueName: \"kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.369551 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq7b7"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.380019 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.394380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j925t" event={"ID":"0f3b2066-2ea9-421b-863d-defe1f13cf46","Type":"ContainerStarted","Data":"529fd64d3637ca505d639506b4db0aa6e80ad9855be40a96c1a5e758932385a3"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.399544 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e061dbaab76c9487fb416477a053902fddc7a578af464b363d80952b5f962aa9"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.399584 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1dbc2fcb7394696770a512789fe4f512e97d02807beaeda7f8969875c790a649"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.403013 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"14b32db60a5cee85ca2cc08084823d436c8e97a4f95806ab7125ce597bcb5d57"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.403531 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.409482 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"573bbbe85b3d5cc9961997c729edbebf81f555b4a82f07671eede1b0af043ecf"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.409522 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"672318bc154e2ae8854bef140ab0d6f2dffa5919d74e8221a77bd45eb0b5e76e"}
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.435520 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-j925t" podStartSLOduration=10.435489575 podStartE2EDuration="10.435489575s" podCreationTimestamp="2025-10-14 06:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:02.416059475 +0000 UTC m=+150.327143281" watchObservedRunningTime="2025-10-14 06:50:02.435489575 +0000 UTC m=+150.346573391"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.460833 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9v72"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.465894 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.470192 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.470262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8wn\" (UniqueName: \"kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.470300 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.470726 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.470981 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.478855 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9v72"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.515899 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8wn\" (UniqueName: \"kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn\") pod \"certified-operators-5z2zp\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.573758 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.575044 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.575075 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cdl\" (UniqueName: \"kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.576850 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zp"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.676718 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.676786 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74cdl\" (UniqueName: \"kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.676874 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.677158 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq7b7"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.677523 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.677536 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.695184 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cdl\" (UniqueName: \"kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl\") pod \"community-operators-p9v72\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.713443 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.756571 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 06:50:02 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld
Oct 14 06:50:02 crc kubenswrapper[5058]: [+]process-running ok
Oct 14 06:50:02 crc kubenswrapper[5058]: healthz check failed
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.756615 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.806782 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9v72"
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.808558 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Oct 14 06:50:02 crc kubenswrapper[5058]: W1014 06:50:02.834108 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c3118a_120e_45b8_80fc_1c939c4a7e85.slice/crio-36e58f1fb91f078f69b47b4787ba1b4e0f23c0444f1f1c75eb240b89b85450f3 WatchSource:0}: Error finding container 36e58f1fb91f078f69b47b4787ba1b4e0f23c0444f1f1c75eb240b89b85450f3: Status 404 returned error can't find the container with id 36e58f1fb91f078f69b47b4787ba1b4e0f23c0444f1f1c75eb240b89b85450f3
Oct 14 06:50:02 crc kubenswrapper[5058]: W1014 06:50:02.869106 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3391b890_848a_4d4b_9bd8_9ca2410e8961.slice/crio-cf5b41d40a4fbf2fc40165ec812f7570c52bf64ae89b6a0808d5a4f77a2d8418 WatchSource:0}: Error finding container cf5b41d40a4fbf2fc40165ec812f7570c52bf64ae89b6a0808d5a4f77a2d8418: Status 404 returned error can't find the container with id cf5b41d40a4fbf2fc40165ec812f7570c52bf64ae89b6a0808d5a4f77a2d8418
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.872988 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.873024 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"]
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.878279 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2nd\" (UniqueName: \"kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd\") pod \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") "
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.878357 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume\") pod \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") "
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.878434 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume\") pod \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\" (UID: \"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057\") "
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.879381 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume" (OuterVolumeSpecName: "config-volume") pod "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" (UID: "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.897445 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd" (OuterVolumeSpecName: "kube-api-access-kb2nd") pod "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" (UID: "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057"). InnerVolumeSpecName "kube-api-access-kb2nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.897625 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" (UID: "0cab2f8a-7c4f-4faa-a89c-8d566cc1b057"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.979922 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-config-volume\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.979946 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb2nd\" (UniqueName: \"kubernetes.io/projected/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-kube-api-access-kb2nd\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:02 crc kubenswrapper[5058]: I1014 06:50:02.979956 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.010121 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9v72"]
Oct 14 06:50:03 crc kubenswrapper[5058]: W1014 06:50:03.013617 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod372c99e2_16a2_46d7_8712_042849d7827b.slice/crio-d9c8415e7f7087574bcf45e3a6b05bbbacb8d71f85483a3af15e04a75797a783 WatchSource:0}: Error finding container d9c8415e7f7087574bcf45e3a6b05bbbacb8d71f85483a3af15e04a75797a783: Status 404 returned error can't find the container with id d9c8415e7f7087574bcf45e3a6b05bbbacb8d71f85483a3af15e04a75797a783
Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.425479 5058 generic.go:334] "Generic (PLEG): container finished" podID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerID="32ff0e875595e2d8f5bdcbe3452c5bfc57a979dd078fd411be3a370f4ac6baae" exitCode=0
Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.427076 5058 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerDied","Data":"32ff0e875595e2d8f5bdcbe3452c5bfc57a979dd078fd411be3a370f4ac6baae"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.427133 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerStarted","Data":"740792567eefd6daec0d25a2314848700a86596cfbce1bb1df870755ac37aba9"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.428770 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.429686 5058 generic.go:334] "Generic (PLEG): container finished" podID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerID="4e283c84d76bef6712b3b508e836a085e6ca5b2985cc1c8013ae434cd9eb98b3" exitCode=0 Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.429739 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerDied","Data":"4e283c84d76bef6712b3b508e836a085e6ca5b2985cc1c8013ae434cd9eb98b3"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.429761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerStarted","Data":"cf5b41d40a4fbf2fc40165ec812f7570c52bf64ae89b6a0808d5a4f77a2d8418"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.432470 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.432479 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng" event={"ID":"0cab2f8a-7c4f-4faa-a89c-8d566cc1b057","Type":"ContainerDied","Data":"6a11b5c72756237174adafaab2e4c5d08dae7d49c44bfd98f5b84a8194fa5161"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.432500 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a11b5c72756237174adafaab2e4c5d08dae7d49c44bfd98f5b84a8194fa5161" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.437847 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" event={"ID":"35c3118a-120e-45b8-80fc-1c939c4a7e85","Type":"ContainerStarted","Data":"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.437892 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" event={"ID":"35c3118a-120e-45b8-80fc-1c939c4a7e85","Type":"ContainerStarted","Data":"36e58f1fb91f078f69b47b4787ba1b4e0f23c0444f1f1c75eb240b89b85450f3"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.438539 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.440188 5058 generic.go:334] "Generic (PLEG): container finished" podID="372c99e2-16a2-46d7-8712-042849d7827b" containerID="e4fecf3e4689f31b202efb69c386d644e030953e2611d492f9acc3cb41b79c6b" exitCode=0 Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 
06:50:03.440241 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerDied","Data":"e4fecf3e4689f31b202efb69c386d644e030953e2611d492f9acc3cb41b79c6b"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.440313 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerStarted","Data":"d9c8415e7f7087574bcf45e3a6b05bbbacb8d71f85483a3af15e04a75797a783"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.451315 5058 generic.go:334] "Generic (PLEG): container finished" podID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerID="f9e60b4c3e2a398affc49f964e93c0262c7128014ee5db259089fa097af0a850" exitCode=0 Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.451349 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerDied","Data":"f9e60b4c3e2a398affc49f964e93c0262c7128014ee5db259089fa097af0a850"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.451375 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerStarted","Data":"6f47c98a355c394a3059fca19918422f75f4382b781832d3371283179b997b99"} Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.493007 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" podStartSLOduration=129.492987615 podStartE2EDuration="2m9.492987615s" podCreationTimestamp="2025-10-14 06:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:03.489984928 +0000 UTC m=+151.401068754" watchObservedRunningTime="2025-10-14 06:50:03.492987615 +0000 UTC m=+151.404071421" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.656715 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.656787 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.757975 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 06:50:03 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld Oct 14 06:50:03 crc kubenswrapper[5058]: [+]process-running ok Oct 14 06:50:03 crc kubenswrapper[5058]: healthz check failed Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.758052 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" 
podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.858689 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:50:03 crc kubenswrapper[5058]: E1014 06:50:03.859206 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" containerName="collect-profiles" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.859226 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" containerName="collect-profiles" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.859386 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" containerName="collect-profiles" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.860266 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.875561 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 06:50:03 crc kubenswrapper[5058]: I1014 06:50:03.882141 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.014918 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.015301 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.016256 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.016435 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.016610 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94vzg\" (UniqueName: \"kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.017910 5058 patch_prober.go:28] interesting pod/console-f9d7485db-fhbms container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.017958 5058 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-fhbms" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.117858 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.117939 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.118007 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94vzg\" (UniqueName: \"kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.118576 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.118721 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.150080 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94vzg\" (UniqueName: \"kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg\") pod \"redhat-marketplace-zwpjq\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.178370 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.255850 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.260487 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.260615 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.294865 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.303710 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.303813 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.307187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.307421 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.414351 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.432263 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.432413 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbcg\" (UniqueName: \"kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.432456 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.432579 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.432639 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.461471 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerStarted","Data":"9072f4f913a1069139eb9a2ed59ee0fed4d8301566a11b6993e8d7ec196c188b"} Oct 14 06:50:04 crc 
kubenswrapper[5058]: I1014 06:50:04.533943 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534000 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534033 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbcg\" (UniqueName: \"kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534060 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534110 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534168 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534758 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.534788 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.549149 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.551424 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbcg\" (UniqueName: \"kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg\") pod \"redhat-marketplace-wv9wz\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.582577 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.639973 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.758235 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 06:50:04 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld Oct 14 06:50:04 crc kubenswrapper[5058]: [+]process-running ok Oct 14 06:50:04 crc kubenswrapper[5058]: healthz check failed Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.758302 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.975714 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:50:04 crc kubenswrapper[5058]: I1014 06:50:04.982776 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m4grl" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.010663 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.092343 5058 patch_prober.go:28] interesting pod/downloads-7954f5f757-htb67 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.092383 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htb67" podUID="5ff50c24-42d7-4716-bf86-a6bc553086d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.092499 5058 patch_prober.go:28] interesting pod/downloads-7954f5f757-htb67 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.092551 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-htb67" podUID="5ff50c24-42d7-4716-bf86-a6bc553086d6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 
06:50:05.120685 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.262901 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.263846 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.269177 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.276307 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.354450 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.354600 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.354726 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdqs\" (UniqueName: \"kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.439296 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.440038 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.448228 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.448954 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.450924 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.455817 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.455870 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqdqs\" (UniqueName: \"kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.455908 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.456385 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.456685 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.469637 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e","Type":"ContainerStarted","Data":"22c9dc28dfc111a609e08e98dff67057cdefc68ade023c9e8de1a6c74a8c6c98"} Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.471517 5058 generic.go:334] "Generic (PLEG): container finished" podID="f1da2131-20c0-4609-b12a-098253e6b89c" containerID="2aa72ba686912a1261af7b38890d6728d07682ffdde98a99430084d7e41aa889" exitCode=0 Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.471601 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerDied","Data":"2aa72ba686912a1261af7b38890d6728d07682ffdde98a99430084d7e41aa889"} Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.478263 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqdqs\" (UniqueName: \"kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs\") pod \"redhat-operators-hcl8t\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.484592 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerStarted","Data":"c5704f7aabadee907b43f746f6ddcfbbd1b429cbb9716f3a6834ddafecf53d47"} Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.557472 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.557542 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.609327 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.654883 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.655891 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.659894 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.659976 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4h2\" (UniqueName: \"kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.659994 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.660016 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.660087 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.660154 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.668306 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.681385 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.753883 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.756500 5058 patch_prober.go:28] interesting pod/router-default-5444994796-pkv6l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 
14 06:50:05 crc kubenswrapper[5058]: [-]has-synced failed: reason withheld Oct 14 06:50:05 crc kubenswrapper[5058]: [+]process-running ok Oct 14 06:50:05 crc kubenswrapper[5058]: healthz check failed Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.756568 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pkv6l" podUID="d69e9ea9-ae21-42ae-a6f6-845adda33d54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.761086 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4h2\" (UniqueName: \"kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.761120 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.761159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.761714 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.761981 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.782259 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4h2\" (UniqueName: \"kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2\") pod \"redhat-operators-8wxt9\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.825243 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.903346 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:50:05 crc kubenswrapper[5058]: W1014 06:50:05.917019 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60d1a52_405f_4ae4_afc1_9080f1e28893.slice/crio-21144128a2f1eb68bb9e0bc34a0ae8c1aabceeed1fb841ee3e6b23b32ad79171 WatchSource:0}: Error finding container 21144128a2f1eb68bb9e0bc34a0ae8c1aabceeed1fb841ee3e6b23b32ad79171: Status 404 returned error can't find the container with id 21144128a2f1eb68bb9e0bc34a0ae8c1aabceeed1fb841ee3e6b23b32ad79171 Oct 14 06:50:05 crc kubenswrapper[5058]: I1014 06:50:05.980654 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.041042 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.291726 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:06 crc kubenswrapper[5058]: W1014 06:50:06.313017 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f1fe50_6279_4e7b_9031_9fa78a3dbcb3.slice/crio-ffe250169805080da02db1af94bc82c0051c42ae1dc7e25ef7b7b54913deda93 WatchSource:0}: Error finding container ffe250169805080da02db1af94bc82c0051c42ae1dc7e25ef7b7b54913deda93: Status 404 returned error can't find the container with id ffe250169805080da02db1af94bc82c0051c42ae1dc7e25ef7b7b54913deda93 Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.517232 5058 generic.go:334] "Generic (PLEG): container finished" podID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerID="1d5a1fd59d07c2e8f59b47ab622f3aa367668281799f1e8a1e051c13e71c56cf" exitCode=0 Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.517287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerDied","Data":"1d5a1fd59d07c2e8f59b47ab622f3aa367668281799f1e8a1e051c13e71c56cf"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.519775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f0c60cea-5e01-49f9-9c42-cb239acad6d4","Type":"ContainerStarted","Data":"c89e081dbd96f33908ea7947a77387c46094f2ed89735895c05164e430e2f3a6"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.519824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f0c60cea-5e01-49f9-9c42-cb239acad6d4","Type":"ContainerStarted","Data":"0a1b1ba99242870d3199e1a5dcf5226b6b582218ab4cd43c3900c5cb9f3e671a"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.522260 5058 generic.go:334] "Generic (PLEG): container finished" podID="7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" containerID="058d8ec2320c2c0966a930b051c54c6c3dc1ab96321782e0fd6f9ecfb2fbfd92" exitCode=0 Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.522403 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e","Type":"ContainerDied","Data":"058d8ec2320c2c0966a930b051c54c6c3dc1ab96321782e0fd6f9ecfb2fbfd92"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.524905 5058 generic.go:334] "Generic (PLEG): container finished" podID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerID="f278809a083fa8229ca52b5921cb0b808f99dd5c03a66324a354aeebcf9f616f" exitCode=0 Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.524948 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerDied","Data":"f278809a083fa8229ca52b5921cb0b808f99dd5c03a66324a354aeebcf9f616f"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.524963 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerStarted","Data":"21144128a2f1eb68bb9e0bc34a0ae8c1aabceeed1fb841ee3e6b23b32ad79171"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.528539 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerStarted","Data":"ffe250169805080da02db1af94bc82c0051c42ae1dc7e25ef7b7b54913deda93"} Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.552176 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.552153386 podStartE2EDuration="1.552153386s" podCreationTimestamp="2025-10-14 06:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:06.545972368 +0000 UTC m=+154.457056174" watchObservedRunningTime="2025-10-14 06:50:06.552153386 +0000 UTC m=+154.463237192" Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.758258 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:50:06 crc kubenswrapper[5058]: I1014 06:50:06.762155 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pkv6l" Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.539019 5058 generic.go:334] "Generic (PLEG): container finished" podID="f0c60cea-5e01-49f9-9c42-cb239acad6d4" containerID="c89e081dbd96f33908ea7947a77387c46094f2ed89735895c05164e430e2f3a6" exitCode=0 Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.539124 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f0c60cea-5e01-49f9-9c42-cb239acad6d4","Type":"ContainerDied","Data":"c89e081dbd96f33908ea7947a77387c46094f2ed89735895c05164e430e2f3a6"} Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.540937 5058 generic.go:334] "Generic (PLEG): container finished" podID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerID="73be4b8bec41a961326f7864e7a787ab54c55c88453bcbbe14ab73567ef7f03b" exitCode=0 Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.540999 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerDied","Data":"73be4b8bec41a961326f7864e7a787ab54c55c88453bcbbe14ab73567ef7f03b"} Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 
06:50:07.790153 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.896217 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir\") pod \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") "
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.896286 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access\") pod \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\" (UID: \"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e\") "
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.897438 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" (UID: "7d26ee15-b13a-4ab0-9eea-d0af5a89c89e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.901651 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" (UID: "7d26ee15-b13a-4ab0-9eea-d0af5a89c89e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.997547 5058 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:07 crc kubenswrapper[5058]: I1014 06:50:07.997580 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d26ee15-b13a-4ab0-9eea-d0af5a89c89e-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:08 crc kubenswrapper[5058]: I1014 06:50:08.571055 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7d26ee15-b13a-4ab0-9eea-d0af5a89c89e","Type":"ContainerDied","Data":"22c9dc28dfc111a609e08e98dff67057cdefc68ade023c9e8de1a6c74a8c6c98"}
Oct 14 06:50:08 crc kubenswrapper[5058]: I1014 06:50:08.571103 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Oct 14 06:50:08 crc kubenswrapper[5058]: I1014 06:50:08.571105 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22c9dc28dfc111a609e08e98dff67057cdefc68ade023c9e8de1a6c74a8c6c98"
Oct 14 06:50:10 crc kubenswrapper[5058]: I1014 06:50:10.537481 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-j42gk"
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.633103 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.760629 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir\") pod \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") "
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.760732 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access\") pod \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\" (UID: \"f0c60cea-5e01-49f9-9c42-cb239acad6d4\") "
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.760741 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f0c60cea-5e01-49f9-9c42-cb239acad6d4" (UID: "f0c60cea-5e01-49f9-9c42-cb239acad6d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.761154 5058 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.779439 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f0c60cea-5e01-49f9-9c42-cb239acad6d4" (UID: "f0c60cea-5e01-49f9-9c42-cb239acad6d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:50:12 crc kubenswrapper[5058]: I1014 06:50:12.862078 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0c60cea-5e01-49f9-9c42-cb239acad6d4-kube-api-access\") on node \"crc\" DevicePath \"\""
Oct 14 06:50:13 crc kubenswrapper[5058]: I1014 06:50:13.556465 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr"
Oct 14 06:50:13 crc kubenswrapper[5058]: I1014 06:50:13.613704 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f0c60cea-5e01-49f9-9c42-cb239acad6d4","Type":"ContainerDied","Data":"0a1b1ba99242870d3199e1a5dcf5226b6b582218ab4cd43c3900c5cb9f3e671a"}
Oct 14 06:50:13 crc kubenswrapper[5058]: I1014 06:50:13.613745 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1b1ba99242870d3199e1a5dcf5226b6b582218ab4cd43c3900c5cb9f3e671a"
Oct 14 06:50:13 crc kubenswrapper[5058]: I1014 06:50:13.613814 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Oct 14 06:50:14 crc kubenswrapper[5058]: I1014 06:50:14.014640 5058 patch_prober.go:28] interesting pod/console-f9d7485db-fhbms container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Oct 14 06:50:14 crc kubenswrapper[5058]: I1014 06:50:14.014698 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fhbms" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Oct 14 06:50:15 crc kubenswrapper[5058]: I1014 06:50:15.083995 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-htb67"
Oct 14 06:50:15 crc kubenswrapper[5058]: I1014 06:50:15.501461 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:50:15 crc kubenswrapper[5058]: I1014 06:50:15.510709 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a70e631f-95b4-451e-821b-8b9297428934-metrics-certs\") pod \"network-metrics-daemon-ckdsj\" (UID: \"a70e631f-95b4-451e-821b-8b9297428934\") " pod="openshift-multus/network-metrics-daemon-ckdsj"
Oct 14 06:50:15 crc kubenswrapper[5058]: I1014 06:50:15.731405 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ckdsj" Oct 14 06:50:22 crc kubenswrapper[5058]: I1014 06:50:22.314494 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" Oct 14 06:50:24 crc kubenswrapper[5058]: I1014 06:50:24.020618 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:50:24 crc kubenswrapper[5058]: I1014 06:50:24.024503 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.193829 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.194032 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr8wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5z2zp_openshift-marketplace(3391b890-848a-4d4b-9bd8-9ca2410e8961): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.195272 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5z2zp" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.228048 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.228485 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94vzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zwpjq_openshift-marketplace(f1da2131-20c0-4609-b12a-098253e6b89c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.229746 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zwpjq" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.278242 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.278403 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q8dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mq7b7_openshift-marketplace(e120bbf5-4072-4a5a-ae6f-e3aff762b987): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 06:50:24 crc kubenswrapper[5058]: E1014 06:50:24.279588 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mq7b7" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" Oct 14 06:50:26 crc kubenswrapper[5058]: E1014 06:50:26.935196 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zwpjq" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" Oct 14 06:50:26 crc kubenswrapper[5058]: E1014 06:50:26.945781 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mq7b7" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" Oct 14 06:50:26 crc kubenswrapper[5058]: E1014 06:50:26.945900 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5z2zp" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.204655 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ckdsj"] Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.696588 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" 
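[Editor's note] Once a pull fails, the kubelet places the image in pull back-off, and later sync attempts fail fast with ImagePullBackOff (as above) until the back-off window expires. A rough Go sketch of exponential back-off with a cap; the initial/step/max values are illustrative assumptions, not read from kubelet configuration:

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 10 * time.Second
	maxDelay     = 5 * time.Minute
)

// backoff tracks per-image retry state: each consecutive failure doubles
// the delay up to a cap, mirroring the general shape of image pull back-off.
type backoff struct {
	delays map[string]time.Duration
}

func (b *backoff) next(image string) time.Duration {
	d, ok := b.delays[image]
	if !ok {
		d = initialDelay
	} else if d *= 2; d > maxDelay {
		d = maxDelay
	}
	b.delays[image] = d
	return d
}

func main() {
	b := &backoff{delays: map[string]time.Duration{}}
	img := "registry.redhat.io/redhat/certified-operator-index:v4.18"
	for i := 1; i <= 5; i++ {
		fmt.Printf("pull attempt %d failed; backing off %s\n", i, b.next(img))
	}
}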
event={"ID":"a70e631f-95b4-451e-821b-8b9297428934","Type":"ContainerStarted","Data":"88ee3ed4ce645c851f1962b875a85379699ec0bb948f37c9f2a744cb6f9a6282"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.696888 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" event={"ID":"a70e631f-95b4-451e-821b-8b9297428934","Type":"ContainerStarted","Data":"9800a52f82a299dd801f37f6f44e25a9be552e5c200fb82d46dccf9b18062b6e"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.696902 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ckdsj" event={"ID":"a70e631f-95b4-451e-821b-8b9297428934","Type":"ContainerStarted","Data":"2938764c1e0d34d5a90ec7445a4f690ce1b5cf25ba93e24ec88d09293698554b"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.698566 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerStarted","Data":"c56b71401e5de423183353d8264cf225bd1d41ba79de11a9a602ac1e5655cf1d"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.701130 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerStarted","Data":"114ba8a5b5d598627f02c31a680c44aafa3164e3d5e2e61142f604b2557c5b9e"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.703785 5058 generic.go:334] "Generic (PLEG): container finished" podID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerID="7de781a425856bf18a6326d991ee9b127e10cda15a2b8730eeffb0e0a25850ec" exitCode=0 Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.703909 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerDied","Data":"7de781a425856bf18a6326d991ee9b127e10cda15a2b8730eeffb0e0a25850ec"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.705903 5058 generic.go:334] "Generic (PLEG): container finished" podID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerID="6e3185232a03c01ae4b8ac3affb2aec4c2b15b1e520b86090370ca5cc5a70bab" exitCode=0 Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.705947 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerDied","Data":"6e3185232a03c01ae4b8ac3affb2aec4c2b15b1e520b86090370ca5cc5a70bab"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.708418 5058 generic.go:334] "Generic (PLEG): container finished" podID="372c99e2-16a2-46d7-8712-042849d7827b" containerID="05c6fb0121f050571efd83528f6a559a32c93e3a0a0ddf014966d186acf3ca8d" exitCode=0 Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.708446 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerDied","Data":"05c6fb0121f050571efd83528f6a559a32c93e3a0a0ddf014966d186acf3ca8d"} Oct 14 06:50:27 crc kubenswrapper[5058]: I1014 06:50:27.744886 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ckdsj" podStartSLOduration=154.744869205 podStartE2EDuration="2m34.744869205s" podCreationTimestamp="2025-10-14 06:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:50:27.721057879 +0000 UTC m=+175.632141685" watchObservedRunningTime="2025-10-14 06:50:27.744869205 +0000 UTC m=+175.655953011" Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.721526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerStarted","Data":"acc23e381e84e46101b486cd0f38618cd80f8eb242a0fcd16431625ac4ed69dd"} Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.726427 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerStarted","Data":"559dca0aa3597e5c2e2b0cf77ed8d852fa7a2c31ad1c537c7dd3a624c3ce0dba"} Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.733142 5058 generic.go:334] "Generic (PLEG): container finished" podID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerID="c56b71401e5de423183353d8264cf225bd1d41ba79de11a9a602ac1e5655cf1d" exitCode=0 Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.733216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerDied","Data":"c56b71401e5de423183353d8264cf225bd1d41ba79de11a9a602ac1e5655cf1d"} Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.737325 5058 generic.go:334] "Generic (PLEG): container finished" podID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerID="114ba8a5b5d598627f02c31a680c44aafa3164e3d5e2e61142f604b2557c5b9e" exitCode=0 Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.737673 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerDied","Data":"114ba8a5b5d598627f02c31a680c44aafa3164e3d5e2e61142f604b2557c5b9e"} Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.742677 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerStarted","Data":"672d4ce31541803d34f630564652050fcd13ebc723a031f46e18c2b96fed81c1"} Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.767844 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wv9wz" podStartSLOduration=3.080125123 podStartE2EDuration="24.767781058s" podCreationTimestamp="2025-10-14 06:50:04 +0000 UTC" firstStartedPulling="2025-10-14 06:50:06.520519884 +0000 UTC m=+154.431603690" lastFinishedPulling="2025-10-14 06:50:28.208175809 +0000 UTC m=+176.119259625" observedRunningTime="2025-10-14 06:50:28.747198925 +0000 UTC m=+176.658282781" watchObservedRunningTime="2025-10-14 06:50:28.767781058 +0000 UTC m=+176.678864944" Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.782721 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9v72" podStartSLOduration=1.9733901010000001 podStartE2EDuration="26.782697978s" podCreationTimestamp="2025-10-14 06:50:02 +0000 UTC" firstStartedPulling="2025-10-14 06:50:03.441573313 +0000 UTC m=+151.352657119" lastFinishedPulling="2025-10-14 06:50:28.25088117 +0000 UTC m=+176.161964996" observedRunningTime="2025-10-14 06:50:28.778367873 +0000 UTC m=+176.689451739" 
watchObservedRunningTime="2025-10-14 06:50:28.782697978 +0000 UTC m=+176.693781794" Oct 14 06:50:28 crc kubenswrapper[5058]: I1014 06:50:28.804518 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mxl4f" podStartSLOduration=2.983764441 podStartE2EDuration="27.804498357s" podCreationTimestamp="2025-10-14 06:50:01 +0000 UTC" firstStartedPulling="2025-10-14 06:50:03.428501526 +0000 UTC m=+151.339585332" lastFinishedPulling="2025-10-14 06:50:28.249235442 +0000 UTC m=+176.160319248" observedRunningTime="2025-10-14 06:50:28.803961441 +0000 UTC m=+176.715045257" watchObservedRunningTime="2025-10-14 06:50:28.804498357 +0000 UTC m=+176.715582173" Oct 14 06:50:29 crc kubenswrapper[5058]: I1014 06:50:29.750275 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerStarted","Data":"c629a22d14d63987d08b706be9e260343572e55cf7a36b785ed43e7c41decfbf"} Oct 14 06:50:29 crc kubenswrapper[5058]: I1014 06:50:29.753049 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerStarted","Data":"5e322f23d26cb88857e1313a75a33835bc61cf46db3e3e5ec32059c2d4bf45c1"} Oct 14 06:50:29 crc kubenswrapper[5058]: I1014 06:50:29.785716 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8wxt9" podStartSLOduration=3.166242478 podStartE2EDuration="24.785697857s" podCreationTimestamp="2025-10-14 06:50:05 +0000 UTC" firstStartedPulling="2025-10-14 06:50:07.545889028 +0000 UTC m=+155.456972834" lastFinishedPulling="2025-10-14 06:50:29.165344407 +0000 UTC m=+177.076428213" observedRunningTime="2025-10-14 06:50:29.770214101 +0000 UTC m=+177.681297907" watchObservedRunningTime="2025-10-14 06:50:29.785697857 +0000 UTC m=+177.696781663" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.179453 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.179545 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.360946 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.392586 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcl8t" podStartSLOduration=4.443004786 podStartE2EDuration="27.392554962s" podCreationTimestamp="2025-10-14 06:50:05 +0000 UTC" firstStartedPulling="2025-10-14 06:50:06.531711117 +0000 UTC m=+154.442794923" lastFinishedPulling="2025-10-14 06:50:29.481261293 +0000 UTC m=+177.392345099" observedRunningTime="2025-10-14 06:50:29.786203922 +0000 UTC m=+177.697287738" watchObservedRunningTime="2025-10-14 06:50:32.392554962 +0000 UTC m=+180.303638808" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.807693 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.810179 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:32 crc kubenswrapper[5058]: I1014 06:50:32.863990 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:33 crc kubenswrapper[5058]: I1014 06:50:33.656571 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 06:50:33 crc kubenswrapper[5058]: I1014 06:50:33.656652 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 06:50:33 crc kubenswrapper[5058]: I1014 06:50:33.840252 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:34 crc kubenswrapper[5058]: I1014 06:50:34.583030 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:34 crc kubenswrapper[5058]: I1014 06:50:34.584130 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:34 crc kubenswrapper[5058]: I1014 06:50:34.640246 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:34 crc kubenswrapper[5058]: I1014 06:50:34.833444 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.610449 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.610549 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.681357 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.784949 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cd2lf" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.918624 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.963461 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9v72"] Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.964214 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9v72" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="registry-server" containerID="cri-o://559dca0aa3597e5c2e2b0cf77ed8d852fa7a2c31ad1c537c7dd3a624c3ce0dba" gracePeriod=2 Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.981773 5058 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:35 crc kubenswrapper[5058]: I1014 06:50:35.981874 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.041309 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.804222 5058 generic.go:334] "Generic (PLEG): container finished" podID="372c99e2-16a2-46d7-8712-042849d7827b" containerID="559dca0aa3597e5c2e2b0cf77ed8d852fa7a2c31ad1c537c7dd3a624c3ce0dba" exitCode=0 Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.805229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerDied","Data":"559dca0aa3597e5c2e2b0cf77ed8d852fa7a2c31ad1c537c7dd3a624c3ce0dba"} Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.885122 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.923847 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.945313 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities\") pod \"372c99e2-16a2-46d7-8712-042849d7827b\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.945393 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content\") pod \"372c99e2-16a2-46d7-8712-042849d7827b\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.945450 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74cdl\" (UniqueName: \"kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl\") pod \"372c99e2-16a2-46d7-8712-042849d7827b\" (UID: \"372c99e2-16a2-46d7-8712-042849d7827b\") " Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.948476 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities" (OuterVolumeSpecName: "utilities") pod "372c99e2-16a2-46d7-8712-042849d7827b" (UID: "372c99e2-16a2-46d7-8712-042849d7827b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:36 crc kubenswrapper[5058]: I1014 06:50:36.997427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl" (OuterVolumeSpecName: "kube-api-access-74cdl") pod "372c99e2-16a2-46d7-8712-042849d7827b" (UID: "372c99e2-16a2-46d7-8712-042849d7827b"). InnerVolumeSpecName "kube-api-access-74cdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.016025 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "372c99e2-16a2-46d7-8712-042849d7827b" (UID: "372c99e2-16a2-46d7-8712-042849d7827b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.048337 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.048372 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372c99e2-16a2-46d7-8712-042849d7827b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.048384 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74cdl\" (UniqueName: \"kubernetes.io/projected/372c99e2-16a2-46d7-8712-042849d7827b-kube-api-access-74cdl\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.814652 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9v72" event={"ID":"372c99e2-16a2-46d7-8712-042849d7827b","Type":"ContainerDied","Data":"d9c8415e7f7087574bcf45e3a6b05bbbacb8d71f85483a3af15e04a75797a783"} Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.814676 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9v72" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.814730 5058 scope.go:117] "RemoveContainer" containerID="559dca0aa3597e5c2e2b0cf77ed8d852fa7a2c31ad1c537c7dd3a624c3ce0dba" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.843767 5058 scope.go:117] "RemoveContainer" containerID="05c6fb0121f050571efd83528f6a559a32c93e3a0a0ddf014966d186acf3ca8d" Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.855893 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9v72"] Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.860165 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9v72"] Oct 14 06:50:37 crc kubenswrapper[5058]: I1014 06:50:37.872749 5058 scope.go:117] "RemoveContainer" containerID="e4fecf3e4689f31b202efb69c386d644e030953e2611d492f9acc3cb41b79c6b" Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.161612 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.162612 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wv9wz" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="registry-server" containerID="cri-o://acc23e381e84e46101b486cd0f38618cd80f8eb242a0fcd16431625ac4ed69dd" gracePeriod=2 Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.360936 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.797147 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="372c99e2-16a2-46d7-8712-042849d7827b" path="/var/lib/kubelet/pods/372c99e2-16a2-46d7-8712-042849d7827b/volumes" Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.824397 5058 generic.go:334] "Generic (PLEG): container finished" podID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerID="acc23e381e84e46101b486cd0f38618cd80f8eb242a0fcd16431625ac4ed69dd" exitCode=0 Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.824461 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerDied","Data":"acc23e381e84e46101b486cd0f38618cd80f8eb242a0fcd16431625ac4ed69dd"} Oct 14 06:50:38 crc kubenswrapper[5058]: I1014 06:50:38.825992 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8wxt9" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="registry-server" containerID="cri-o://c629a22d14d63987d08b706be9e260343572e55cf7a36b785ed43e7c41decfbf" gracePeriod=2 Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.112998 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.175228 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content\") pod \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.175324 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmbcg\" (UniqueName: \"kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg\") pod \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.175443 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities\") pod \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\" (UID: \"77bee3fc-c834-41b1-9f18-9d86071ee7c0\") " Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.176969 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities" (OuterVolumeSpecName: "utilities") pod "77bee3fc-c834-41b1-9f18-9d86071ee7c0" (UID: "77bee3fc-c834-41b1-9f18-9d86071ee7c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.184988 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg" (OuterVolumeSpecName: "kube-api-access-bmbcg") pod "77bee3fc-c834-41b1-9f18-9d86071ee7c0" (UID: "77bee3fc-c834-41b1-9f18-9d86071ee7c0"). InnerVolumeSpecName "kube-api-access-bmbcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.198754 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77bee3fc-c834-41b1-9f18-9d86071ee7c0" (UID: "77bee3fc-c834-41b1-9f18-9d86071ee7c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.276993 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.277047 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bee3fc-c834-41b1-9f18-9d86071ee7c0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.277065 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmbcg\" (UniqueName: \"kubernetes.io/projected/77bee3fc-c834-41b1-9f18-9d86071ee7c0-kube-api-access-bmbcg\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.837090 5058 generic.go:334] "Generic (PLEG): container finished" podID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerID="c629a22d14d63987d08b706be9e260343572e55cf7a36b785ed43e7c41decfbf" exitCode=0 Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.837187 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerDied","Data":"c629a22d14d63987d08b706be9e260343572e55cf7a36b785ed43e7c41decfbf"} Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.840558 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wv9wz" event={"ID":"77bee3fc-c834-41b1-9f18-9d86071ee7c0","Type":"ContainerDied","Data":"c5704f7aabadee907b43f746f6ddcfbbd1b429cbb9716f3a6834ddafecf53d47"} Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.840725 5058 scope.go:117] "RemoveContainer" containerID="acc23e381e84e46101b486cd0f38618cd80f8eb242a0fcd16431625ac4ed69dd" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.841237 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wv9wz" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.882004 5058 scope.go:117] "RemoveContainer" containerID="6e3185232a03c01ae4b8ac3affb2aec4c2b15b1e520b86090370ca5cc5a70bab" Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.903641 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.908323 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wv9wz"] Oct 14 06:50:39 crc kubenswrapper[5058]: I1014 06:50:39.910114 5058 scope.go:117] "RemoveContainer" containerID="1d5a1fd59d07c2e8f59b47ab622f3aa367668281799f1e8a1e051c13e71c56cf" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.217748 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.302366 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content\") pod \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.302475 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities\") pod \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.302672 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4h2\" (UniqueName: \"kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2\") pod \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\" (UID: \"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3\") " Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.303200 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities" (OuterVolumeSpecName: "utilities") pod "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" (UID: "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.304036 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.308252 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2" (OuterVolumeSpecName: "kube-api-access-dq4h2") pod "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" (UID: "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3"). InnerVolumeSpecName "kube-api-access-dq4h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.405314 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4h2\" (UniqueName: \"kubernetes.io/projected/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-kube-api-access-dq4h2\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.545480 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" (UID: "73f1fe50-6279-4e7b-9031-9fa78a3dbcb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.607508 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.800637 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" path="/var/lib/kubelet/pods/77bee3fc-c834-41b1-9f18-9d86071ee7c0/volumes" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.849777 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8wxt9" event={"ID":"73f1fe50-6279-4e7b-9031-9fa78a3dbcb3","Type":"ContainerDied","Data":"ffe250169805080da02db1af94bc82c0051c42ae1dc7e25ef7b7b54913deda93"} Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.849864 5058 scope.go:117] "RemoveContainer" containerID="c629a22d14d63987d08b706be9e260343572e55cf7a36b785ed43e7c41decfbf" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.849913 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8wxt9" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.881726 5058 scope.go:117] "RemoveContainer" containerID="114ba8a5b5d598627f02c31a680c44aafa3164e3d5e2e61142f604b2557c5b9e" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.882967 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.888245 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8wxt9"] Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.906185 5058 scope.go:117] "RemoveContainer" containerID="73be4b8bec41a961326f7864e7a787ab54c55c88453bcbbe14ab73567ef7f03b" Oct 14 06:50:40 crc kubenswrapper[5058]: I1014 06:50:40.922096 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 06:50:42 crc kubenswrapper[5058]: I1014 06:50:42.251995 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:50:42 crc kubenswrapper[5058]: I1014 06:50:42.808824 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" path="/var/lib/kubelet/pods/73f1fe50-6279-4e7b-9031-9fa78a3dbcb3/volumes" Oct 14 06:50:42 crc kubenswrapper[5058]: I1014 06:50:42.878839 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerStarted","Data":"0bb574f11e2b42dba1e586c701080d254c11f32d7562c78a6a855ba2c3362564"} Oct 14 06:50:42 crc kubenswrapper[5058]: I1014 06:50:42.880974 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerStarted","Data":"8ac15b3afde645d2c6bb5e0ef6ff7eb7f3a0796561e498e4e7a5da44a44bcf20"} Oct 14 06:50:42 crc kubenswrapper[5058]: I1014 06:50:42.886001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" 
event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerStarted","Data":"120c6b566e3fab6073b11cce2fbb2ce4eafdc364c924afa00ccb2c250e67738d"} Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.895503 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerDied","Data":"120c6b566e3fab6073b11cce2fbb2ce4eafdc364c924afa00ccb2c250e67738d"} Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.895509 5058 generic.go:334] "Generic (PLEG): container finished" podID="f1da2131-20c0-4609-b12a-098253e6b89c" containerID="120c6b566e3fab6073b11cce2fbb2ce4eafdc364c924afa00ccb2c250e67738d" exitCode=0 Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.898998 5058 generic.go:334] "Generic (PLEG): container finished" podID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerID="0bb574f11e2b42dba1e586c701080d254c11f32d7562c78a6a855ba2c3362564" exitCode=0 Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.899058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerDied","Data":"0bb574f11e2b42dba1e586c701080d254c11f32d7562c78a6a855ba2c3362564"} Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.901173 5058 generic.go:334] "Generic (PLEG): container finished" podID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerID="8ac15b3afde645d2c6bb5e0ef6ff7eb7f3a0796561e498e4e7a5da44a44bcf20" exitCode=0 Oct 14 06:50:43 crc kubenswrapper[5058]: I1014 06:50:43.901217 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerDied","Data":"8ac15b3afde645d2c6bb5e0ef6ff7eb7f3a0796561e498e4e7a5da44a44bcf20"} Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.916991 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerStarted","Data":"55c86c2e185201712debd3d994ec735f8090314e71b92bfc336f39c640697737"} Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.920505 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerStarted","Data":"eea888bb699198fe1e0d14b0fce712995845767c0dc4523e365144b839922df2"} Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.923135 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerStarted","Data":"f85392f2160b60a7adb0a729b48b6e3da74acd2bd8e423b7a1d5c35d30eef559"} Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.939393 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5z2zp" podStartSLOduration=2.503832359 podStartE2EDuration="43.9393815s" podCreationTimestamp="2025-10-14 06:50:02 +0000 UTC" firstStartedPulling="2025-10-14 06:50:03.431684058 +0000 UTC m=+151.342767864" lastFinishedPulling="2025-10-14 06:50:44.867233169 +0000 UTC m=+192.778317005" observedRunningTime="2025-10-14 06:50:45.937162756 +0000 UTC m=+193.848246562" watchObservedRunningTime="2025-10-14 06:50:45.9393815 +0000 UTC m=+193.850465306" Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.954142 5058 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwpjq" podStartSLOduration=3.069457944 podStartE2EDuration="42.954132035s" podCreationTimestamp="2025-10-14 06:50:03 +0000 UTC" firstStartedPulling="2025-10-14 06:50:05.475816004 +0000 UTC m=+153.386899810" lastFinishedPulling="2025-10-14 06:50:45.360490065 +0000 UTC m=+193.271573901" observedRunningTime="2025-10-14 06:50:45.952854418 +0000 UTC m=+193.863938244" watchObservedRunningTime="2025-10-14 06:50:45.954132035 +0000 UTC m=+193.865215841" Oct 14 06:50:45 crc kubenswrapper[5058]: I1014 06:50:45.966572 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mq7b7" podStartSLOduration=2.369521159 podStartE2EDuration="43.966556293s" podCreationTimestamp="2025-10-14 06:50:02 +0000 UTC" firstStartedPulling="2025-10-14 06:50:03.454820965 +0000 UTC m=+151.365904771" lastFinishedPulling="2025-10-14 06:50:45.051856099 +0000 UTC m=+192.962939905" observedRunningTime="2025-10-14 06:50:45.965119772 +0000 UTC m=+193.876203588" watchObservedRunningTime="2025-10-14 06:50:45.966556293 +0000 UTC m=+193.877640099" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.370756 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.371521 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.417566 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.578108 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.578158 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.624738 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.985534 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:52 crc kubenswrapper[5058]: I1014 06:50:52.998546 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:50:54 crc kubenswrapper[5058]: I1014 06:50:54.179030 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:54 crc kubenswrapper[5058]: I1014 06:50:54.179079 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:54 crc kubenswrapper[5058]: I1014 06:50:54.225267 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:55 crc kubenswrapper[5058]: I1014 06:50:55.015194 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:50:56 crc kubenswrapper[5058]: I1014 06:50:56.754255 5058 
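[Editor's note] The recurring pattern above -- probe="readiness" status="" alongside probe="startup" status="unhealthy", then "started", then readiness "ready" -- reflects startup-probe gating: until the startup probe succeeds, the readiness result is withheld (reported as empty), and only afterwards does readiness flip to "ready". A compact Go sketch of that gating rule; the type and method names are invented for illustration and this is an interpretation of the log, not kubelet source:

package main

import "fmt"

// probeState is a toy model of per-container probe bookkeeping.
type probeState struct {
	startupSucceeded bool
	readinessHealthy bool
}

// readinessStatus mimics the gating visible in the log: while the startup
// probe has not yet succeeded, readiness is reported as "" (unknown);
// afterwards it reflects the real readiness result.
func (p probeState) readinessStatus() string {
	if !p.startupSucceeded {
		return ""
	}
	if p.readinessHealthy {
		return "ready"
	}
	return "not ready"
}

func main() {
	s := probeState{}
	fmt.Printf("startup pending: readiness=%q\n", s.readinessStatus()) // ""
	s.startupSucceeded = true
	s.readinessHealthy = true
	fmt.Printf("startup done:    readiness=%q\n", s.readinessStatus()) // "ready"
}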
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"] Oct 14 06:50:56 crc kubenswrapper[5058]: I1014 06:50:56.754854 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5z2zp" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="registry-server" containerID="cri-o://55c86c2e185201712debd3d994ec735f8090314e71b92bfc336f39c640697737" gracePeriod=2 Oct 14 06:50:56 crc kubenswrapper[5058]: I1014 06:50:56.971586 5058 generic.go:334] "Generic (PLEG): container finished" podID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerID="55c86c2e185201712debd3d994ec735f8090314e71b92bfc336f39c640697737" exitCode=0 Oct 14 06:50:56 crc kubenswrapper[5058]: I1014 06:50:56.971861 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerDied","Data":"55c86c2e185201712debd3d994ec735f8090314e71b92bfc336f39c640697737"} Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.191286 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.324781 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities\") pod \"3391b890-848a-4d4b-9bd8-9ca2410e8961\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.324864 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content\") pod \"3391b890-848a-4d4b-9bd8-9ca2410e8961\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.324900 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr8wn\" (UniqueName: \"kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn\") pod \"3391b890-848a-4d4b-9bd8-9ca2410e8961\" (UID: \"3391b890-848a-4d4b-9bd8-9ca2410e8961\") " Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.326181 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities" (OuterVolumeSpecName: "utilities") pod "3391b890-848a-4d4b-9bd8-9ca2410e8961" (UID: "3391b890-848a-4d4b-9bd8-9ca2410e8961"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.333671 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn" (OuterVolumeSpecName: "kube-api-access-zr8wn") pod "3391b890-848a-4d4b-9bd8-9ca2410e8961" (UID: "3391b890-848a-4d4b-9bd8-9ca2410e8961"). InnerVolumeSpecName "kube-api-access-zr8wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.370864 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3391b890-848a-4d4b-9bd8-9ca2410e8961" (UID: "3391b890-848a-4d4b-9bd8-9ca2410e8961"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.426256 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.426288 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr8wn\" (UniqueName: \"kubernetes.io/projected/3391b890-848a-4d4b-9bd8-9ca2410e8961-kube-api-access-zr8wn\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.426300 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3391b890-848a-4d4b-9bd8-9ca2410e8961-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.977682 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5z2zp" event={"ID":"3391b890-848a-4d4b-9bd8-9ca2410e8961","Type":"ContainerDied","Data":"cf5b41d40a4fbf2fc40165ec812f7570c52bf64ae89b6a0808d5a4f77a2d8418"} Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.977721 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5z2zp" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.977746 5058 scope.go:117] "RemoveContainer" containerID="55c86c2e185201712debd3d994ec735f8090314e71b92bfc336f39c640697737" Oct 14 06:50:57 crc kubenswrapper[5058]: I1014 06:50:57.994522 5058 scope.go:117] "RemoveContainer" containerID="0bb574f11e2b42dba1e586c701080d254c11f32d7562c78a6a855ba2c3362564" Oct 14 06:50:58 crc kubenswrapper[5058]: I1014 06:50:58.003864 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"] Oct 14 06:50:58 crc kubenswrapper[5058]: I1014 06:50:58.007114 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5z2zp"] Oct 14 06:50:58 crc kubenswrapper[5058]: I1014 06:50:58.024617 5058 scope.go:117] "RemoveContainer" containerID="4e283c84d76bef6712b3b508e836a085e6ca5b2985cc1c8013ae434cd9eb98b3" Oct 14 06:50:58 crc kubenswrapper[5058]: I1014 06:50:58.800698 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" path="/var/lib/kubelet/pods/3391b890-848a-4d4b-9bd8-9ca2410e8961/volumes" Oct 14 06:51:03 crc kubenswrapper[5058]: I1014 06:51:03.656730 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 06:51:03 crc kubenswrapper[5058]: I1014 06:51:03.657565 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 06:51:03 crc kubenswrapper[5058]: I1014 06:51:03.657643 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 06:51:03 crc kubenswrapper[5058]: I1014 06:51:03.658678 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 06:51:03 crc kubenswrapper[5058]: I1014 06:51:03.658906 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc" gracePeriod=600 Oct 14 06:51:04 crc kubenswrapper[5058]: I1014 06:51:04.010764 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc" exitCode=0 Oct 14 06:51:04 crc kubenswrapper[5058]: I1014 06:51:04.010820 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc"} Oct 14 06:51:04 crc kubenswrapper[5058]: I1014 06:51:04.010852 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533"} Oct 14 06:51:24 crc kubenswrapper[5058]: I1014 06:51:24.671964 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:51:49 crc kubenswrapper[5058]: I1014 06:51:49.709439 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" containerName="oauth-openshift" containerID="cri-o://41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645" gracePeriod=15 Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.142760 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.188528 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mnxvj"] Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.188987 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c60cea-5e01-49f9-9c42-cb239acad6d4" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189028 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c60cea-5e01-49f9-9c42-cb239acad6d4" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189060 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189078 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189105 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189122 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189147 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189163 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189182 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189198 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189226 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189241 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189266 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" containerName="oauth-openshift" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189283 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" containerName="oauth-openshift" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189305 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189320 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189348 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189366 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189383 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189399 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189417 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189432 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189450 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189465 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189494 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189510 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="extract-content" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189532 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189549 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.189569 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189585 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="extract-utilities" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189841 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" containerName="oauth-openshift" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189876 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d26ee15-b13a-4ab0-9eea-d0af5a89c89e" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189902 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c60cea-5e01-49f9-9c42-cb239acad6d4" containerName="pruner" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189926 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3391b890-848a-4d4b-9bd8-9ca2410e8961" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189949 5058 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="73f1fe50-6279-4e7b-9031-9fa78a3dbcb3" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189973 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="372c99e2-16a2-46d7-8712-042849d7827b" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.189993 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="77bee3fc-c834-41b1-9f18-9d86071ee7c0" containerName="registry-server" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.190752 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.210031 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mnxvj"] Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.230623 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.230757 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.230883 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.230893 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.230977 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231037 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231078 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4z65\" (UniqueName: \"kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231115 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231157 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231208 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231253 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231293 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc 
kubenswrapper[5058]: I1014 06:51:50.231331 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231364 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231405 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data\") pod \"5deb3681-f574-40e7-b0dd-3048d164de55\" (UID: \"5deb3681-f574-40e7-b0dd-3048d164de55\") " Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231579 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-dir\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231665 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231717 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdf6\" (UniqueName: \"kubernetes.io/projected/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-kube-api-access-7bdf6\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231751 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231820 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231860 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231898 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231929 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-policies\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.231972 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.232039 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.232880 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.232950 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.232988 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.233030 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.233069 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.233125 5058 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5deb3681-f574-40e7-b0dd-3048d164de55-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.233146 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.234763 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.235017 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.235343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.240044 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.240392 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.240706 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.241101 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.248561 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.248696 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65" (OuterVolumeSpecName: "kube-api-access-q4z65") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "kube-api-access-q4z65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.249100 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.254600 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.254831 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5deb3681-f574-40e7-b0dd-3048d164de55" (UID: "5deb3681-f574-40e7-b0dd-3048d164de55"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.331327 5058 generic.go:334] "Generic (PLEG): container finished" podID="5deb3681-f574-40e7-b0dd-3048d164de55" containerID="41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645" exitCode=0 Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.331386 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" event={"ID":"5deb3681-f574-40e7-b0dd-3048d164de55","Type":"ContainerDied","Data":"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645"} Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.331422 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" event={"ID":"5deb3681-f574-40e7-b0dd-3048d164de55","Type":"ContainerDied","Data":"4062158c22a4c9e7040f654dc1ff311ab919f762809f362d76625620ea1d6e89"} Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.331445 5058 scope.go:117] "RemoveContainer" containerID="41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.331868 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9snxr" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.333712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.333773 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-policies\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.333996 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334398 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334436 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334481 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334525 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " 
pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334555 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-dir\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334592 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334642 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334673 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdf6\" (UniqueName: \"kubernetes.io/projected/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-kube-api-access-7bdf6\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334715 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334839 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334863 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334917 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-router-certs\") on 
node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.334987 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4z65\" (UniqueName: \"kubernetes.io/projected/5deb3681-f574-40e7-b0dd-3048d164de55-kube-api-access-q4z65\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335037 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335061 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335096 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-policies\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335115 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335136 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335153 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335174 5058 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5deb3681-f574-40e7-b0dd-3048d164de55-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335191 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335209 5058 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5deb3681-f574-40e7-b0dd-3048d164de55-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.335052 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-audit-dir\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 
06:51:50.336642 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.337235 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.337954 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.341034 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.341049 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.341124 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.341371 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.342858 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.343359 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.344620 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.344950 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.359159 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdf6\" (UniqueName: \"kubernetes.io/projected/1b536ee2-cc25-4c56-ae3e-4ad7792463b8-kube-api-access-7bdf6\") pod \"oauth-openshift-9d745f8b5-mnxvj\" (UID: \"1b536ee2-cc25-4c56-ae3e-4ad7792463b8\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.363708 5058 scope.go:117] "RemoveContainer" containerID="41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645" Oct 14 06:51:50 crc kubenswrapper[5058]: E1014 06:51:50.364593 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645\": container with ID starting with 41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645 not found: ID does not exist" containerID="41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.364630 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645"} err="failed to get container status \"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645\": rpc error: code = NotFound desc = could not find container \"41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645\": container with ID starting with 41f6b0f184e91c373b860106fbf66a68f94dd296a24674a37667e3e65789e645 not found: ID does not exist" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.376230 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.381754 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9snxr"] Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.515524 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.799462 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deb3681-f574-40e7-b0dd-3048d164de55" path="/var/lib/kubelet/pods/5deb3681-f574-40e7-b0dd-3048d164de55/volumes" Oct 14 06:51:50 crc kubenswrapper[5058]: I1014 06:51:50.985231 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mnxvj"] Oct 14 06:51:51 crc kubenswrapper[5058]: I1014 06:51:51.341083 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" event={"ID":"1b536ee2-cc25-4c56-ae3e-4ad7792463b8","Type":"ContainerStarted","Data":"b39c09f799f64951d7745b9bfbd118b8d22007320bcd5b979fd560eb5cfff877"} Oct 14 06:51:52 crc kubenswrapper[5058]: I1014 06:51:52.352273 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" event={"ID":"1b536ee2-cc25-4c56-ae3e-4ad7792463b8","Type":"ContainerStarted","Data":"0ced00222232b8b9ecf8b7dae1477965eb6c01e13390d05d0b8f274963f80fdd"} Oct 14 06:51:52 crc kubenswrapper[5058]: I1014 06:51:52.352893 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:52 crc kubenswrapper[5058]: I1014 06:51:52.363583 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" Oct 14 06:51:52 crc kubenswrapper[5058]: I1014 06:51:52.387458 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9d745f8b5-mnxvj" podStartSLOduration=28.387427837 podStartE2EDuration="28.387427837s" podCreationTimestamp="2025-10-14 06:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:51:52.383239903 +0000 UTC m=+260.294323749" watchObservedRunningTime="2025-10-14 06:51:52.387427837 +0000 UTC m=+260.298511673" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.128500 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.129145 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mxl4f" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="registry-server" containerID="cri-o://672d4ce31541803d34f630564652050fcd13ebc723a031f46e18c2b96fed81c1" gracePeriod=30 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.149360 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq7b7"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.149624 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mq7b7" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="registry-server" containerID="cri-o://eea888bb699198fe1e0d14b0fce712995845767c0dc4523e365144b839922df2" gracePeriod=30 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.160638 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.160890 5058 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" containerID="cri-o://3bcc1f8873dcead744dc617d7f831f70b9bcc752ad19d7e4a49ec5695d5fc584" gracePeriod=30 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.175419 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.176371 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwpjq" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="registry-server" containerID="cri-o://f85392f2160b60a7adb0a729b48b6e3da74acd2bd8e423b7a1d5c35d30eef559" gracePeriod=30 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.191446 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.191662 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcl8t" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="registry-server" containerID="cri-o://5e322f23d26cb88857e1313a75a33835bc61cf46db3e3e5ec32059c2d4bf45c1" gracePeriod=30 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.223314 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-plv9m"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.224305 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.239147 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-plv9m"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.297056 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t9z\" (UniqueName: \"kubernetes.io/projected/b2e15c09-cc07-43fe-b046-d0c5406a82e0-kube-api-access-g2t9z\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.297244 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.297338 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.398529 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t9z\" (UniqueName: 
\"kubernetes.io/projected/b2e15c09-cc07-43fe-b046-d0c5406a82e0-kube-api-access-g2t9z\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.398595 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.398631 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.400413 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.409016 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b2e15c09-cc07-43fe-b046-d0c5406a82e0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.415505 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t9z\" (UniqueName: \"kubernetes.io/projected/b2e15c09-cc07-43fe-b046-d0c5406a82e0-kube-api-access-g2t9z\") pod \"marketplace-operator-79b997595-plv9m\" (UID: \"b2e15c09-cc07-43fe-b046-d0c5406a82e0\") " pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.459176 5058 generic.go:334] "Generic (PLEG): container finished" podID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerID="3bcc1f8873dcead744dc617d7f831f70b9bcc752ad19d7e4a49ec5695d5fc584" exitCode=0 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.459371 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" event={"ID":"21155744-ed89-42cc-b09d-cd8dd1896e6d","Type":"ContainerDied","Data":"3bcc1f8873dcead744dc617d7f831f70b9bcc752ad19d7e4a49ec5695d5fc584"} Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.462496 5058 generic.go:334] "Generic (PLEG): container finished" podID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerID="eea888bb699198fe1e0d14b0fce712995845767c0dc4523e365144b839922df2" exitCode=0 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.462548 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" 
event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerDied","Data":"eea888bb699198fe1e0d14b0fce712995845767c0dc4523e365144b839922df2"} Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.466165 5058 generic.go:334] "Generic (PLEG): container finished" podID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerID="5e322f23d26cb88857e1313a75a33835bc61cf46db3e3e5ec32059c2d4bf45c1" exitCode=0 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.466225 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerDied","Data":"5e322f23d26cb88857e1313a75a33835bc61cf46db3e3e5ec32059c2d4bf45c1"} Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.471425 5058 generic.go:334] "Generic (PLEG): container finished" podID="f1da2131-20c0-4609-b12a-098253e6b89c" containerID="f85392f2160b60a7adb0a729b48b6e3da74acd2bd8e423b7a1d5c35d30eef559" exitCode=0 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.471473 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerDied","Data":"f85392f2160b60a7adb0a729b48b6e3da74acd2bd8e423b7a1d5c35d30eef559"} Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.474902 5058 generic.go:334] "Generic (PLEG): container finished" podID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerID="672d4ce31541803d34f630564652050fcd13ebc723a031f46e18c2b96fed81c1" exitCode=0 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.474939 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerDied","Data":"672d4ce31541803d34f630564652050fcd13ebc723a031f46e18c2b96fed81c1"} Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.577478 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.580713 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.588823 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.593112 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.615632 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.616940 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701748 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8dx\" (UniqueName: \"kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx\") pod \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701787 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics\") pod \"21155744-ed89-42cc-b09d-cd8dd1896e6d\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701868 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94vzg\" (UniqueName: \"kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg\") pod \"f1da2131-20c0-4609-b12a-098253e6b89c\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701907 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gscfg\" (UniqueName: \"kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg\") pod \"21155744-ed89-42cc-b09d-cd8dd1896e6d\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701938 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities\") pod \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.701967 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca\") pod \"21155744-ed89-42cc-b09d-cd8dd1896e6d\" (UID: \"21155744-ed89-42cc-b09d-cd8dd1896e6d\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702010 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqdqs\" (UniqueName: \"kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs\") pod \"c60d1a52-405f-4ae4-afc1-9080f1e28893\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702047 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities\") pod \"c60d1a52-405f-4ae4-afc1-9080f1e28893\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702099 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content\") pod \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\" (UID: \"e120bbf5-4072-4a5a-ae6f-e3aff762b987\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702131 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities\") pod \"f1da2131-20c0-4609-b12a-098253e6b89c\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702171 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content\") pod \"f1da2131-20c0-4609-b12a-098253e6b89c\" (UID: \"f1da2131-20c0-4609-b12a-098253e6b89c\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702193 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content\") pod \"c60d1a52-405f-4ae4-afc1-9080f1e28893\" (UID: \"c60d1a52-405f-4ae4-afc1-9080f1e28893\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702208 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content\") pod \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702259 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities\") pod \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.702275 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jlhm\" (UniqueName: \"kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm\") pod \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\" (UID: \"921a9387-d8e1-43f4-9415-f47bbb67e9e5\") " Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.703923 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities" (OuterVolumeSpecName: "utilities") pod "921a9387-d8e1-43f4-9415-f47bbb67e9e5" (UID: "921a9387-d8e1-43f4-9415-f47bbb67e9e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.706299 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg" (OuterVolumeSpecName: "kube-api-access-94vzg") pod "f1da2131-20c0-4609-b12a-098253e6b89c" (UID: "f1da2131-20c0-4609-b12a-098253e6b89c"). InnerVolumeSpecName "kube-api-access-94vzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.707311 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities" (OuterVolumeSpecName: "utilities") pod "c60d1a52-405f-4ae4-afc1-9080f1e28893" (UID: "c60d1a52-405f-4ae4-afc1-9080f1e28893"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.707421 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities" (OuterVolumeSpecName: "utilities") pod "f1da2131-20c0-4609-b12a-098253e6b89c" (UID: "f1da2131-20c0-4609-b12a-098253e6b89c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.707769 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "21155744-ed89-42cc-b09d-cd8dd1896e6d" (UID: "21155744-ed89-42cc-b09d-cd8dd1896e6d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.708076 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs" (OuterVolumeSpecName: "kube-api-access-vqdqs") pod "c60d1a52-405f-4ae4-afc1-9080f1e28893" (UID: "c60d1a52-405f-4ae4-afc1-9080f1e28893"). InnerVolumeSpecName "kube-api-access-vqdqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.708196 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "21155744-ed89-42cc-b09d-cd8dd1896e6d" (UID: "21155744-ed89-42cc-b09d-cd8dd1896e6d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.708329 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities" (OuterVolumeSpecName: "utilities") pod "e120bbf5-4072-4a5a-ae6f-e3aff762b987" (UID: "e120bbf5-4072-4a5a-ae6f-e3aff762b987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.708339 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg" (OuterVolumeSpecName: "kube-api-access-gscfg") pod "21155744-ed89-42cc-b09d-cd8dd1896e6d" (UID: "21155744-ed89-42cc-b09d-cd8dd1896e6d"). InnerVolumeSpecName "kube-api-access-gscfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.709425 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx" (OuterVolumeSpecName: "kube-api-access-6q8dx") pod "e120bbf5-4072-4a5a-ae6f-e3aff762b987" (UID: "e120bbf5-4072-4a5a-ae6f-e3aff762b987"). InnerVolumeSpecName "kube-api-access-6q8dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.709954 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm" (OuterVolumeSpecName: "kube-api-access-9jlhm") pod "921a9387-d8e1-43f4-9415-f47bbb67e9e5" (UID: "921a9387-d8e1-43f4-9415-f47bbb67e9e5"). InnerVolumeSpecName "kube-api-access-9jlhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.725080 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1da2131-20c0-4609-b12a-098253e6b89c" (UID: "f1da2131-20c0-4609-b12a-098253e6b89c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.755920 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "921a9387-d8e1-43f4-9415-f47bbb67e9e5" (UID: "921a9387-d8e1-43f4-9415-f47bbb67e9e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.792657 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e120bbf5-4072-4a5a-ae6f-e3aff762b987" (UID: "e120bbf5-4072-4a5a-ae6f-e3aff762b987"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805639 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q8dx\" (UniqueName: \"kubernetes.io/projected/e120bbf5-4072-4a5a-ae6f-e3aff762b987-kube-api-access-6q8dx\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805673 5058 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805686 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94vzg\" (UniqueName: \"kubernetes.io/projected/f1da2131-20c0-4609-b12a-098253e6b89c-kube-api-access-94vzg\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805696 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gscfg\" (UniqueName: \"kubernetes.io/projected/21155744-ed89-42cc-b09d-cd8dd1896e6d-kube-api-access-gscfg\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805708 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805722 5058 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21155744-ed89-42cc-b09d-cd8dd1896e6d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805732 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqdqs\" (UniqueName: \"kubernetes.io/projected/c60d1a52-405f-4ae4-afc1-9080f1e28893-kube-api-access-vqdqs\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805741 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805751 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e120bbf5-4072-4a5a-ae6f-e3aff762b987-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805760 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805770 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1da2131-20c0-4609-b12a-098253e6b89c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805780 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805794 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/921a9387-d8e1-43f4-9415-f47bbb67e9e5-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.805818 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jlhm\" (UniqueName: \"kubernetes.io/projected/921a9387-d8e1-43f4-9415-f47bbb67e9e5-kube-api-access-9jlhm\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.821517 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-plv9m"] Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.822786 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c60d1a52-405f-4ae4-afc1-9080f1e28893" (UID: "c60d1a52-405f-4ae4-afc1-9080f1e28893"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 06:52:09 crc kubenswrapper[5058]: W1014 06:52:09.826371 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2e15c09_cc07_43fe_b046_d0c5406a82e0.slice/crio-f78f43e545726800f48bacb503464ea997eca2a43314544a6e15750ff1b3ef40 WatchSource:0}: Error finding container f78f43e545726800f48bacb503464ea997eca2a43314544a6e15750ff1b3ef40: Status 404 returned error can't find the container with id f78f43e545726800f48bacb503464ea997eca2a43314544a6e15750ff1b3ef40 Oct 14 06:52:09 crc kubenswrapper[5058]: I1014 06:52:09.906600 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60d1a52-405f-4ae4-afc1-9080f1e28893-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.482078 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq7b7" event={"ID":"e120bbf5-4072-4a5a-ae6f-e3aff762b987","Type":"ContainerDied","Data":"6f47c98a355c394a3059fca19918422f75f4382b781832d3371283179b997b99"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.482192 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq7b7" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.482358 5058 scope.go:117] "RemoveContainer" containerID="eea888bb699198fe1e0d14b0fce712995845767c0dc4523e365144b839922df2" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.484177 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcl8t" event={"ID":"c60d1a52-405f-4ae4-afc1-9080f1e28893","Type":"ContainerDied","Data":"21144128a2f1eb68bb9e0bc34a0ae8c1aabceeed1fb841ee3e6b23b32ad79171"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.484255 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcl8t" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.488250 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" event={"ID":"b2e15c09-cc07-43fe-b046-d0c5406a82e0","Type":"ContainerStarted","Data":"61dde67fe45ea72ea124e943736c1a3e4af3ff9560625f6229fc6733b7fe3010"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.488292 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" event={"ID":"b2e15c09-cc07-43fe-b046-d0c5406a82e0","Type":"ContainerStarted","Data":"f78f43e545726800f48bacb503464ea997eca2a43314544a6e15750ff1b3ef40"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.488812 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.491709 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.492639 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwpjq" event={"ID":"f1da2131-20c0-4609-b12a-098253e6b89c","Type":"ContainerDied","Data":"9072f4f913a1069139eb9a2ed59ee0fed4d8301566a11b6993e8d7ec196c188b"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.492679 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwpjq" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.497080 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxl4f" event={"ID":"921a9387-d8e1-43f4-9415-f47bbb67e9e5","Type":"ContainerDied","Data":"740792567eefd6daec0d25a2314848700a86596cfbce1bb1df870755ac37aba9"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.497142 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxl4f" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.500375 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" event={"ID":"21155744-ed89-42cc-b09d-cd8dd1896e6d","Type":"ContainerDied","Data":"1d372d2df92dc30d8a9158b50037685972db1f11e470bf3b88916a7a2612b889"} Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.500544 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s5659" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.505564 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-plv9m" podStartSLOduration=1.505547284 podStartE2EDuration="1.505547284s" podCreationTimestamp="2025-10-14 06:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:52:10.503170062 +0000 UTC m=+278.414253868" watchObservedRunningTime="2025-10-14 06:52:10.505547284 +0000 UTC m=+278.416631090" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.506162 5058 scope.go:117] "RemoveContainer" containerID="8ac15b3afde645d2c6bb5e0ef6ff7eb7f3a0796561e498e4e7a5da44a44bcf20" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.541721 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq7b7"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.549012 5058 scope.go:117] "RemoveContainer" containerID="f9e60b4c3e2a398affc49f964e93c0262c7128014ee5db259089fa097af0a850" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.549487 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mq7b7"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.569636 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.572106 5058 scope.go:117] "RemoveContainer" containerID="5e322f23d26cb88857e1313a75a33835bc61cf46db3e3e5ec32059c2d4bf45c1" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.573381 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcl8t"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.585441 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.588940 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwpjq"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.597537 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.599593 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mxl4f"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.603946 5058 scope.go:117] "RemoveContainer" containerID="c56b71401e5de423183353d8264cf225bd1d41ba79de11a9a602ac1e5655cf1d" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.612270 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.612445 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s5659"] Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.618778 5058 scope.go:117] "RemoveContainer" containerID="f278809a083fa8229ca52b5921cb0b808f99dd5c03a66324a354aeebcf9f616f" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.633028 5058 scope.go:117] "RemoveContainer" containerID="f85392f2160b60a7adb0a729b48b6e3da74acd2bd8e423b7a1d5c35d30eef559" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 
06:52:10.646388 5058 scope.go:117] "RemoveContainer" containerID="120c6b566e3fab6073b11cce2fbb2ce4eafdc364c924afa00ccb2c250e67738d" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.660473 5058 scope.go:117] "RemoveContainer" containerID="2aa72ba686912a1261af7b38890d6728d07682ffdde98a99430084d7e41aa889" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.673932 5058 scope.go:117] "RemoveContainer" containerID="672d4ce31541803d34f630564652050fcd13ebc723a031f46e18c2b96fed81c1" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.694258 5058 scope.go:117] "RemoveContainer" containerID="7de781a425856bf18a6326d991ee9b127e10cda15a2b8730eeffb0e0a25850ec" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.721060 5058 scope.go:117] "RemoveContainer" containerID="32ff0e875595e2d8f5bdcbe3452c5bfc57a979dd078fd411be3a370f4ac6baae" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.731826 5058 scope.go:117] "RemoveContainer" containerID="3bcc1f8873dcead744dc617d7f831f70b9bcc752ad19d7e4a49ec5695d5fc584" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.795854 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" path="/var/lib/kubelet/pods/21155744-ed89-42cc-b09d-cd8dd1896e6d/volumes" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.796494 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" path="/var/lib/kubelet/pods/921a9387-d8e1-43f4-9415-f47bbb67e9e5/volumes" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.797259 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" path="/var/lib/kubelet/pods/c60d1a52-405f-4ae4-afc1-9080f1e28893/volumes" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.798526 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" path="/var/lib/kubelet/pods/e120bbf5-4072-4a5a-ae6f-e3aff762b987/volumes" Oct 14 06:52:10 crc kubenswrapper[5058]: I1014 06:52:10.799277 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" path="/var/lib/kubelet/pods/f1da2131-20c0-4609-b12a-098253e6b89c/volumes" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.351707 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmx7f"] Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352150 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352160 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352170 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352175 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352181 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352200 5058 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352213 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352219 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352225 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352230 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352239 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352244 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352254 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352259 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352280 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352285 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="extract-content" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352291 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352296 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352304 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352309 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352319 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352325 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="extract-utilities" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352330 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 
06:52:11.352335 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: E1014 06:52:11.352344 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352349 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352443 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e120bbf5-4072-4a5a-ae6f-e3aff762b987" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352454 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60d1a52-405f-4ae4-afc1-9080f1e28893" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352462 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="921a9387-d8e1-43f4-9415-f47bbb67e9e5" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352469 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="21155744-ed89-42cc-b09d-cd8dd1896e6d" containerName="marketplace-operator" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.352478 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1da2131-20c0-4609-b12a-098253e6b89c" containerName="registry-server" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.353124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.357264 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.373488 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmx7f"] Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.423025 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpht\" (UniqueName: \"kubernetes.io/projected/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-kube-api-access-zlpht\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.423111 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-catalog-content\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.423263 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-utilities\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.524165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-catalog-content\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.524263 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-utilities\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.524322 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlpht\" (UniqueName: \"kubernetes.io/projected/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-kube-api-access-zlpht\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.524850 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-utilities\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.526702 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-catalog-content\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.549865 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"] Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.550738 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.555205 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.560833 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlpht\" (UniqueName: \"kubernetes.io/projected/ef40d6fd-414a-40e4-b121-d1b18ad98e6e-kube-api-access-zlpht\") pod \"redhat-marketplace-zmx7f\" (UID: \"ef40d6fd-414a-40e4-b121-d1b18ad98e6e\") " pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.564485 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"] Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.625196 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.625301 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.625439 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnf4\" (UniqueName: \"kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.674128 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmx7f" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.726545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.726645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.726758 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jnf4\" (UniqueName: \"kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.727055 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.727260 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.747937 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jnf4\" (UniqueName: \"kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4\") pod \"redhat-operators-7zknx\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") " pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.852661 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmx7f"] Oct 14 06:52:11 crc kubenswrapper[5058]: W1014 06:52:11.859457 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef40d6fd_414a_40e4_b121_d1b18ad98e6e.slice/crio-7e994fe1f2fa54cf56e71a7918e0fcdc20fd0df7b83fa46da66bde1a8370b462 WatchSource:0}: Error finding container 7e994fe1f2fa54cf56e71a7918e0fcdc20fd0df7b83fa46da66bde1a8370b462: Status 404 returned error can't find the container with id 7e994fe1f2fa54cf56e71a7918e0fcdc20fd0df7b83fa46da66bde1a8370b462 Oct 14 06:52:11 crc kubenswrapper[5058]: I1014 06:52:11.883831 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zknx" Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.271385 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"] Oct 14 06:52:12 crc kubenswrapper[5058]: W1014 06:52:12.281243 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90f51c3_20e7_46f3_af7f_05ac7d41bb66.slice/crio-7d10c5bce4771ee145097a5227f47a2938f87cebe50e4b1c1abb3aad893c5328 WatchSource:0}: Error finding container 7d10c5bce4771ee145097a5227f47a2938f87cebe50e4b1c1abb3aad893c5328: Status 404 returned error can't find the container with id 7d10c5bce4771ee145097a5227f47a2938f87cebe50e4b1c1abb3aad893c5328 Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.517308 5058 generic.go:334] "Generic (PLEG): container finished" podID="ef40d6fd-414a-40e4-b121-d1b18ad98e6e" containerID="bcf9b349d69f692a16c29f11f72b2ba5d60679db26035780162d7a3455d8831a" exitCode=0 Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.517536 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmx7f" event={"ID":"ef40d6fd-414a-40e4-b121-d1b18ad98e6e","Type":"ContainerDied","Data":"bcf9b349d69f692a16c29f11f72b2ba5d60679db26035780162d7a3455d8831a"} Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.517839 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmx7f" event={"ID":"ef40d6fd-414a-40e4-b121-d1b18ad98e6e","Type":"ContainerStarted","Data":"7e994fe1f2fa54cf56e71a7918e0fcdc20fd0df7b83fa46da66bde1a8370b462"} Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.520506 5058 generic.go:334] "Generic (PLEG): container finished" podID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerID="4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026" exitCode=0 Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.520600 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerDied","Data":"4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026"} Oct 14 06:52:12 crc kubenswrapper[5058]: I1014 06:52:12.520625 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerStarted","Data":"7d10c5bce4771ee145097a5227f47a2938f87cebe50e4b1c1abb3aad893c5328"} Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.530005 5058 generic.go:334] "Generic (PLEG): container finished" podID="ef40d6fd-414a-40e4-b121-d1b18ad98e6e" containerID="eab8e97de26999ca9ba779eff104c33d920a75aeb870dc9af40fe39a7e5f7fc1" exitCode=0 Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.530181 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmx7f" event={"ID":"ef40d6fd-414a-40e4-b121-d1b18ad98e6e","Type":"ContainerDied","Data":"eab8e97de26999ca9ba779eff104c33d920a75aeb870dc9af40fe39a7e5f7fc1"} Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.536047 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerStarted","Data":"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"} Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.750716 5058 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c7m7z"] Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.752098 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.753390 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.763899 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7m7z"] Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.865173 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rvz\" (UniqueName: \"kubernetes.io/projected/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-kube-api-access-c6rvz\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.865421 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-catalog-content\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.865759 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-utilities\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.951163 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgh6z"] Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.952012 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.955137 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.961897 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgh6z"] Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.978364 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6rvz\" (UniqueName: \"kubernetes.io/projected/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-kube-api-access-c6rvz\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.978466 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-catalog-content\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.978560 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-utilities\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.979033 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-utilities\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:13 crc kubenswrapper[5058]: I1014 06:52:13.979286 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-catalog-content\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.001273 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6rvz\" (UniqueName: \"kubernetes.io/projected/04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c-kube-api-access-c6rvz\") pod \"certified-operators-c7m7z\" (UID: \"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c\") " pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.077322 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7m7z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.079341 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.079391 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.079441 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjlg\" (UniqueName: \"kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.181182 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.181722 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.181933 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjlg\" (UniqueName: \"kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.183525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.183927 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities\") pod \"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.214664 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjlg\" (UniqueName: \"kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg\") pod 
\"community-operators-qgh6z\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") " pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.285451 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgh6z" Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.289669 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7m7z"] Oct 14 06:52:14 crc kubenswrapper[5058]: W1014 06:52:14.307163 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04b1a95a_8628_4c1c_83d9_3d4ae2bafc3c.slice/crio-21e5c05b4b0d4bc9783cfa742f7235dfcd4edbfaa427d38027cfb383efe8b3c3 WatchSource:0}: Error finding container 21e5c05b4b0d4bc9783cfa742f7235dfcd4edbfaa427d38027cfb383efe8b3c3: Status 404 returned error can't find the container with id 21e5c05b4b0d4bc9783cfa742f7235dfcd4edbfaa427d38027cfb383efe8b3c3 Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.465846 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgh6z"] Oct 14 06:52:14 crc kubenswrapper[5058]: W1014 06:52:14.481026 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c1b4d67_6d7d_4656_80cd_ac114b08ffbe.slice/crio-31a9f51166d286d6eba1f211817033840266af88c44ea03357c18c64e6a397d2 WatchSource:0}: Error finding container 31a9f51166d286d6eba1f211817033840266af88c44ea03357c18c64e6a397d2: Status 404 returned error can't find the container with id 31a9f51166d286d6eba1f211817033840266af88c44ea03357c18c64e6a397d2 Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.543037 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerStarted","Data":"31a9f51166d286d6eba1f211817033840266af88c44ea03357c18c64e6a397d2"} Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.544972 5058 generic.go:334] "Generic (PLEG): container finished" podID="04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c" containerID="371572676e9f5fbd57d932f21e015038794c470a94a904436c3759fa22df26d1" exitCode=0 Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.545037 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m7z" event={"ID":"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c","Type":"ContainerDied","Data":"371572676e9f5fbd57d932f21e015038794c470a94a904436c3759fa22df26d1"} Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.545063 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m7z" event={"ID":"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c","Type":"ContainerStarted","Data":"21e5c05b4b0d4bc9783cfa742f7235dfcd4edbfaa427d38027cfb383efe8b3c3"} Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.550275 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmx7f" event={"ID":"ef40d6fd-414a-40e4-b121-d1b18ad98e6e","Type":"ContainerStarted","Data":"d093b153940e94b53e942fcee6ac6069642f9fa4de33a75df4593992cdc914f9"} Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.551993 5058 generic.go:334] "Generic (PLEG): container finished" podID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerID="f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6" exitCode=0 
Oct 14 06:52:14 crc kubenswrapper[5058]: I1014 06:52:14.552019 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerDied","Data":"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"}
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.558892 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerID="a851e36534c0086221486e384e09cac4c458229f7266aa51c4964456a27c2969" exitCode=0
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.559091 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerDied","Data":"a851e36534c0086221486e384e09cac4c458229f7266aa51c4964456a27c2969"}
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.564839 5058 generic.go:334] "Generic (PLEG): container finished" podID="04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c" containerID="0821cfd0a2429d88d7a369c34b7532cc13aa387a6c660924f31a5ff9e935874c" exitCode=0
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.564904 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m7z" event={"ID":"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c","Type":"ContainerDied","Data":"0821cfd0a2429d88d7a369c34b7532cc13aa387a6c660924f31a5ff9e935874c"}
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.567412 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerStarted","Data":"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"}
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.578691 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmx7f" podStartSLOduration=2.92286087 podStartE2EDuration="4.578661136s" podCreationTimestamp="2025-10-14 06:52:11 +0000 UTC" firstStartedPulling="2025-10-14 06:52:12.519677482 +0000 UTC m=+280.430761288" lastFinishedPulling="2025-10-14 06:52:14.175477738 +0000 UTC m=+282.086561554" observedRunningTime="2025-10-14 06:52:14.609043204 +0000 UTC m=+282.520127010" watchObservedRunningTime="2025-10-14 06:52:15.578661136 +0000 UTC m=+283.489744942"
Oct 14 06:52:15 crc kubenswrapper[5058]: I1014 06:52:15.616970 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7zknx" podStartSLOduration=2.068491873 podStartE2EDuration="4.616954794s" podCreationTimestamp="2025-10-14 06:52:11 +0000 UTC" firstStartedPulling="2025-10-14 06:52:12.522102125 +0000 UTC m=+280.433185931" lastFinishedPulling="2025-10-14 06:52:15.070565046 +0000 UTC m=+282.981648852" observedRunningTime="2025-10-14 06:52:15.61615411 +0000 UTC m=+283.527237926" watchObservedRunningTime="2025-10-14 06:52:15.616954794 +0000 UTC m=+283.528038600"
Oct 14 06:52:16 crc kubenswrapper[5058]: I1014 06:52:16.574711 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerID="081c4d93f69a27d6638d6667c726fc4bcff4751ff657ba3e7ad320df4b421bec" exitCode=0
Oct 14 06:52:16 crc kubenswrapper[5058]: I1014 06:52:16.574865 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerDied","Data":"081c4d93f69a27d6638d6667c726fc4bcff4751ff657ba3e7ad320df4b421bec"}
Oct 14 06:52:16 crc kubenswrapper[5058]: I1014 06:52:16.580534 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7m7z" event={"ID":"04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c","Type":"ContainerStarted","Data":"ab2939766e935d345f8935bd0cb2943e0d3af2701b0344db37c41981f579471c"}
Oct 14 06:52:16 crc kubenswrapper[5058]: I1014 06:52:16.615059 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c7m7z" podStartSLOduration=2.091624223 podStartE2EDuration="3.615044737s" podCreationTimestamp="2025-10-14 06:52:13 +0000 UTC" firstStartedPulling="2025-10-14 06:52:14.546210415 +0000 UTC m=+282.457294211" lastFinishedPulling="2025-10-14 06:52:16.069630919 +0000 UTC m=+283.980714725" observedRunningTime="2025-10-14 06:52:16.612861611 +0000 UTC m=+284.523945477" watchObservedRunningTime="2025-10-14 06:52:16.615044737 +0000 UTC m=+284.526128543"
Oct 14 06:52:17 crc kubenswrapper[5058]: I1014 06:52:17.589569 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerStarted","Data":"7574808b8fb0172a1433f15e27139e95d3f7d33337f8c7120c66aa402d7e7665"}
Oct 14 06:52:17 crc kubenswrapper[5058]: I1014 06:52:17.612706 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgh6z" podStartSLOduration=3.126302002 podStartE2EDuration="4.612684416s" podCreationTimestamp="2025-10-14 06:52:13 +0000 UTC" firstStartedPulling="2025-10-14 06:52:15.561944311 +0000 UTC m=+283.473028117" lastFinishedPulling="2025-10-14 06:52:17.048326725 +0000 UTC m=+284.959410531" observedRunningTime="2025-10-14 06:52:17.610991355 +0000 UTC m=+285.522075231" watchObservedRunningTime="2025-10-14 06:52:17.612684416 +0000 UTC m=+285.523768222"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.675230 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmx7f"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.676926 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmx7f"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.730617 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmx7f"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.884203 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.884298 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 06:52:21 crc kubenswrapper[5058]: I1014 06:52:21.922419 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 06:52:22 crc kubenswrapper[5058]: I1014 06:52:22.679171 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmx7f"
Oct 14 06:52:22 crc kubenswrapper[5058]: I1014 06:52:22.684197 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.077758 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c7m7z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.079306 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c7m7z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.147516 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c7m7z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.290499 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.290580 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.361549 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.698644 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c7m7z"
Oct 14 06:52:24 crc kubenswrapper[5058]: I1014 06:52:24.700541 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 06:53:03 crc kubenswrapper[5058]: I1014 06:53:03.656353 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:53:03 crc kubenswrapper[5058]: I1014 06:53:03.657107 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:53:33 crc kubenswrapper[5058]: I1014 06:53:33.656130 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:53:33 crc kubenswrapper[5058]: I1014 06:53:33.656727 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:54:03 crc kubenswrapper[5058]: I1014 06:54:03.655999 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:54:03 crc kubenswrapper[5058]: I1014 06:54:03.656945 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:54:03 crc kubenswrapper[5058]: I1014 06:54:03.657052 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 06:54:03 crc kubenswrapper[5058]: I1014 06:54:03.658310 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 06:54:03 crc kubenswrapper[5058]: I1014 06:54:03.658468 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533" gracePeriod=600
Oct 14 06:54:04 crc kubenswrapper[5058]: I1014 06:54:04.341198 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533" exitCode=0
Oct 14 06:54:04 crc kubenswrapper[5058]: I1014 06:54:04.341226 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533"}
Oct 14 06:54:04 crc kubenswrapper[5058]: I1014 06:54:04.342142 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368"}
Oct 14 06:54:04 crc kubenswrapper[5058]: I1014 06:54:04.342175 5058 scope.go:117] "RemoveContainer" containerID="82df5754476794bf51be5faadcd06317808ca016831c86026b0d68f28a94f5dc"
Oct 14 06:54:32 crc kubenswrapper[5058]: I1014 06:54:32.961042 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpf6g"]
Oct 14 06:54:32 crc kubenswrapper[5058]: I1014 06:54:32.962407 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:32 crc kubenswrapper[5058]: I1014 06:54:32.987536 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpf6g"]
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079132 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73110e45-23b5-4282-acda-4e3f4165e300-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079183 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-registry-tls\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079224 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-trusted-ca\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079250 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-bound-sa-token\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079283 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-registry-certificates\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079302 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqdh9\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-kube-api-access-fqdh9\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079322 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73110e45-23b5-4282-acda-4e3f4165e300-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.079350 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.110498 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.180286 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-registry-certificates\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.180653 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqdh9\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-kube-api-access-fqdh9\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.181451 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73110e45-23b5-4282-acda-4e3f4165e300-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.181699 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73110e45-23b5-4282-acda-4e3f4165e300-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.181950 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73110e45-23b5-4282-acda-4e3f4165e300-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.182194 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-registry-tls\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.182925 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-trusted-ca\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.183142 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-bound-sa-token\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.182552 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-registry-certificates\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.185333 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73110e45-23b5-4282-acda-4e3f4165e300-trusted-ca\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.191200 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73110e45-23b5-4282-acda-4e3f4165e300-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.191404 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-registry-tls\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.198361 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-bound-sa-token\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.208784 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqdh9\" (UniqueName: \"kubernetes.io/projected/73110e45-23b5-4282-acda-4e3f4165e300-kube-api-access-fqdh9\") pod \"image-registry-66df7c8f76-xpf6g\" (UID: \"73110e45-23b5-4282-acda-4e3f4165e300\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.281029 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:33 crc kubenswrapper[5058]: I1014 06:54:33.553497 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpf6g"]
Oct 14 06:54:34 crc kubenswrapper[5058]: I1014 06:54:34.544232 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g" event={"ID":"73110e45-23b5-4282-acda-4e3f4165e300","Type":"ContainerStarted","Data":"045f987d40818915cfc5a30ef374c0a7db0a1ded9ef0308b6851524d9c0b0568"}
Oct 14 06:54:34 crc kubenswrapper[5058]: I1014 06:54:34.544550 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:34 crc kubenswrapper[5058]: I1014 06:54:34.544562 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g" event={"ID":"73110e45-23b5-4282-acda-4e3f4165e300","Type":"ContainerStarted","Data":"d986f7e36aa002950951b25fe5adf869160b2787019d351ec6393e6df8ecf179"}
Oct 14 06:54:34 crc kubenswrapper[5058]: I1014 06:54:34.564774 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g" podStartSLOduration=2.564756602 podStartE2EDuration="2.564756602s" podCreationTimestamp="2025-10-14 06:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:54:34.563169315 +0000 UTC m=+422.474253131" watchObservedRunningTime="2025-10-14 06:54:34.564756602 +0000 UTC m=+422.475840408"
Oct 14 06:54:53 crc kubenswrapper[5058]: I1014 06:54:53.287700 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xpf6g"
Oct 14 06:54:53 crc kubenswrapper[5058]: I1014 06:54:53.349721 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"]
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.393060 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" podUID="35c3118a-120e-45b8-80fc-1c939c4a7e85" containerName="registry" containerID="cri-o://2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09" gracePeriod=30
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.825099 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.890870 5058 generic.go:334] "Generic (PLEG): container finished" podID="35c3118a-120e-45b8-80fc-1c939c4a7e85" containerID="2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09" exitCode=0
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.890919 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" event={"ID":"35c3118a-120e-45b8-80fc-1c939c4a7e85","Type":"ContainerDied","Data":"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"}
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.890952 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg" event={"ID":"35c3118a-120e-45b8-80fc-1c939c4a7e85","Type":"ContainerDied","Data":"36e58f1fb91f078f69b47b4787ba1b4e0f23c0444f1f1c75eb240b89b85450f3"}
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.890978 5058 scope.go:117] "RemoveContainer" containerID="2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.891095 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-khhmg"
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.910577 5058 scope.go:117] "RemoveContainer" containerID="2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"
Oct 14 06:55:18 crc kubenswrapper[5058]: E1014 06:55:18.911072 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09\": container with ID starting with 2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09 not found: ID does not exist" containerID="2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.911142 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09"} err="failed to get container status \"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09\": rpc error: code = NotFound desc = could not find container \"2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09\": container with ID starting with 2b1210fbff8d3e899e4456224a346d7f580fa8b271aba3d58fe4f5da979f7a09 not found: ID does not exist"
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.975032 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.975101 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.975126 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89rcw\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.975206 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.975239 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.976701 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.976909 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.977541 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.984579 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.977690 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls\") pod \"35c3118a-120e-45b8-80fc-1c939c4a7e85\" (UID: \"35c3118a-120e-45b8-80fc-1c939c4a7e85\") "
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.985359 5058 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-certificates\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.985412 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35c3118a-120e-45b8-80fc-1c939c4a7e85-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.985724 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.985943 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw" (OuterVolumeSpecName: "kube-api-access-89rcw") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "kube-api-access-89rcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.990073 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:55:18 crc kubenswrapper[5058]: I1014 06:55:18.991574 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.001928 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.027609 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "35c3118a-120e-45b8-80fc-1c939c4a7e85" (UID: "35c3118a-120e-45b8-80fc-1c939c4a7e85"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.087317 5058 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-registry-tls\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.087359 5058 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35c3118a-120e-45b8-80fc-1c939c4a7e85-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.087402 5058 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35c3118a-120e-45b8-80fc-1c939c4a7e85-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.087416 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89rcw\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-kube-api-access-89rcw\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.087431 5058 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35c3118a-120e-45b8-80fc-1c939c4a7e85-bound-sa-token\") on node \"crc\" DevicePath \"\""
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.238943 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"]
Oct 14 06:55:19 crc kubenswrapper[5058]: I1014 06:55:19.248455 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-khhmg"]
Oct 14 06:55:20 crc kubenswrapper[5058]: I1014 06:55:20.802351 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c3118a-120e-45b8-80fc-1c939c4a7e85" path="/var/lib/kubelet/pods/35c3118a-120e-45b8-80fc-1c939c4a7e85/volumes"
Oct 14 06:56:03 crc kubenswrapper[5058]: I1014 06:56:03.656457 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:56:03 crc kubenswrapper[5058]: I1014 06:56:03.657162 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:56:33 crc kubenswrapper[5058]: I1014 06:56:33.656640 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:56:33 crc kubenswrapper[5058]: I1014 06:56:33.657525 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:57:03 crc kubenswrapper[5058]: I1014 06:57:03.657685 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:57:03 crc kubenswrapper[5058]: I1014 06:57:03.658753 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:57:03 crc kubenswrapper[5058]: I1014 06:57:03.658879 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 06:57:03 crc kubenswrapper[5058]: I1014 06:57:03.659854 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 06:57:03 crc kubenswrapper[5058]: I1014 06:57:03.659968 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368" gracePeriod=600
Oct 14 06:57:04 crc kubenswrapper[5058]: I1014 06:57:04.595183 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368" exitCode=0
Oct 14 06:57:04 crc kubenswrapper[5058]: I1014 06:57:04.595245 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368"}
Oct 14 06:57:04 crc kubenswrapper[5058]: I1014 06:57:04.595941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d"}
Oct 14 06:57:04 crc kubenswrapper[5058]: I1014 06:57:04.595977 5058 scope.go:117] "RemoveContainer" containerID="b4c0bd0acf196b268ef3cd7ffdddedde7202a814ba749856184d02a1c1629533"
Oct 14 06:59:03 crc kubenswrapper[5058]: I1014 06:59:03.656880 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 06:59:03 crc kubenswrapper[5058]: I1014 06:59:03.657561 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.866515 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fw5vr"]
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.867632 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-controller" containerID="cri-o://4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868100 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="sbdb" containerID="cri-o://028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868169 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="nbdb" containerID="cri-o://172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868221 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="northd" containerID="cri-o://a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868267 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868311 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-node" containerID="cri-o://dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.868358 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-acl-logging" containerID="cri-o://ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" gracePeriod=30
Oct 14 06:59:13 crc kubenswrapper[5058]: I1014 06:59:13.936171 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller" containerID="cri-o://7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" gracePeriod=30
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.214851 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/3.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.218560 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovn-acl-logging/0.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.219433 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovn-controller/0.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.220064 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.279975 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xnc59"]
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280269 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c3118a-120e-45b8-80fc-1c939c4a7e85" containerName="registry"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280290 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c3118a-120e-45b8-80fc-1c939c4a7e85" containerName="registry"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280302 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-acl-logging"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280311 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-acl-logging"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280320 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="northd"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280330 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="northd"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280339 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280350 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280359 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280368 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280384 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280392 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280407 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="sbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280416 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="sbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280427 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280436 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280445 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280453 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280466 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kubecfg-setup"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280476 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kubecfg-setup"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280488 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-node"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280496 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-node"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280511 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280519 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280532 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280540 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.280550 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="nbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280558 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="nbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280688 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-acl-logging"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280700 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280709 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="sbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280722 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280732 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280740 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="kube-rbac-proxy-node"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280749 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280758 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c3118a-120e-45b8-80fc-1c939c4a7e85" containerName="registry"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280769 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="nbdb"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280781 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovn-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.280791 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="northd"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.281050 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.281068 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58308f56-cccd-4c52-89af-c23806a4769e" containerName="ovnkube-controller"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.283744 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362779 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362890 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362907 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362911 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash" (OuterVolumeSpecName: "host-slash") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362994 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.362929 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363047 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363065 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363099 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363100 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log" (OuterVolumeSpecName: "node-log") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363133 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363152 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363185 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363309 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363665 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.363738 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364214 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364243 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364270 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364297 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364291 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364326 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364395 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364428 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket" (OuterVolumeSpecName: "log-socket") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364472 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364538 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364618 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364736 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364761 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364786 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364829 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkw2\" (UniqueName: \"kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2\") pod \"58308f56-cccd-4c52-89af-c23806a4769e\" (UID: \"58308f56-cccd-4c52-89af-c23806a4769e\") "
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365045 5058 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-slash\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365060 5058 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365073 5058 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-node-log\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365085 5058 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-systemd-units\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365097 5058 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-bin\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365108 5058 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-env-overrides\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365119 5058 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365132 5058 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365145 5058 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-log-socket\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365159 5058 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-kubelet\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365171 5058 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364628 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365387 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364651 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.364680 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365367 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.365438 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.368663 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.368984 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2" (OuterVolumeSpecName: "kube-api-access-hlkw2") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "kube-api-access-hlkw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.378543 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "58308f56-cccd-4c52-89af-c23806a4769e" (UID: "58308f56-cccd-4c52-89af-c23806a4769e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.466849 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovn-node-metrics-cert\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.466921 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.466944 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-bin\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467170 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-systemd-units\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467237 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-netns\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467348 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-systemd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467412 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467472 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-var-lib-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467512 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-env-overrides\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467575 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-kubelet\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467665 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-log-socket\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467712 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-slash\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467862 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-ovn\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467924 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-script-lib\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.467985 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-netd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-etc-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468166 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqgw\" (UniqueName: \"kubernetes.io/projected/8b17cd49-9ba9-4e4d-af25-028d608ae48f-kube-api-access-pzqgw\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468212 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468259 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-node-log\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468290 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-config\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468365 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/58308f56-cccd-4c52-89af-c23806a4769e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468383 5058 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-config\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468397 5058 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468414 5058 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468427 5058 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/58308f56-cccd-4c52-89af-c23806a4769e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468439 5058 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-run-netns\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468451 5058 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-host-cni-netd\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468481 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkw2\" (UniqueName: \"kubernetes.io/projected/58308f56-cccd-4c52-89af-c23806a4769e-kube-api-access-hlkw2\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.468495 5058 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/58308f56-cccd-4c52-89af-c23806a4769e-run-systemd\") on node \"crc\" DevicePath \"\""
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.482357 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/2.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.483031 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/1.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.483079 5058 generic.go:334] "Generic (PLEG): container finished" podID="1288bab5-7372-4acc-963c-6232b27a7975" containerID="eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081" exitCode=2
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.483143 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerDied","Data":"eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.483446 5058 scope.go:117] "RemoveContainer" containerID="a45a6cfaf8807bdce38b952b18ffd1346209466655e75a8d13b85b962b2d7948"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.483782 5058 scope.go:117] "RemoveContainer" containerID="eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081"
Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.484049 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-csl4q_openshift-multus(1288bab5-7372-4acc-963c-6232b27a7975)\"" pod="openshift-multus/multus-csl4q" podUID="1288bab5-7372-4acc-963c-6232b27a7975"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.487120 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovnkube-controller/3.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.490358 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovn-acl-logging/0.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491080 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fw5vr_58308f56-cccd-4c52-89af-c23806a4769e/ovn-controller/0.log"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491549 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491574 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491585 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491595 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491605 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491616 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" exitCode=0
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491625 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" exitCode=143
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491633 5058 generic.go:334] "Generic (PLEG): container finished" podID="58308f56-cccd-4c52-89af-c23806a4769e" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" exitCode=143
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491671 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491685 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491697 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491710 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491721 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491743 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491755 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491763 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491770 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491779 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491786 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491813 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491821 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491829 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491835 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491845 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491857 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491867 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491875 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491882 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491890 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491897 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491720 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.491905 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492078 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492109 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492126 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492186 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492220 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492236 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492248 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492259 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492270 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492281 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492292 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492303 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492314 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492324 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fw5vr" event={"ID":"58308f56-cccd-4c52-89af-c23806a4769e","Type":"ContainerDied","Data":"9d9ed2a8de264e30c7f219ba33ede9b9cd160967a556753f2f55f824f7e6c777"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492357 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492370 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492381 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492392 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492402 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492412 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492424 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492435 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492445 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.492456 5058 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"}
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.521728 5058 scope.go:117] "RemoveContainer" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.537966 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fw5vr"]
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.545694 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fw5vr"]
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.550240 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.569926 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovn-node-metrics-cert\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570044 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-bin\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570095 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-systemd-units\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570126 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-netns\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570155 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-systemd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570183 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570222 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-var-lib-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570256 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-env-overrides\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570290 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-kubelet\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570328 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-log-socket\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570368 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-slash\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-ovn\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570442 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-script-lib\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570474 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-netd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570658 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-etc-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570699 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqgw\" (UniqueName: \"kubernetes.io/projected/8b17cd49-9ba9-4e4d-af25-028d608ae48f-kube-api-access-pzqgw\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570735 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570769 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-node-log\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.570846 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-config\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571152 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-kubelet\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571205 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-systemd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571235 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571246 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-netns\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571307 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-run-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571341 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-bin\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59"
Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571362 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-cni-netd\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571376 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-systemd-units\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571418 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-var-lib-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571502 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-slash\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571466 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-log-socket\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571557 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-run-ovn\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571591 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571649 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-node-log\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571671 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-env-overrides\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.571693 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/8b17cd49-9ba9-4e4d-af25-028d608ae48f-etc-openvswitch\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.572656 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-script-lib\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.574249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovnkube-config\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.575541 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b17cd49-9ba9-4e4d-af25-028d608ae48f-ovn-node-metrics-cert\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.576907 5058 scope.go:117] "RemoveContainer" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.598988 5058 scope.go:117] "RemoveContainer" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.601262 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqgw\" (UniqueName: \"kubernetes.io/projected/8b17cd49-9ba9-4e4d-af25-028d608ae48f-kube-api-access-pzqgw\") pod \"ovnkube-node-xnc59\" (UID: \"8b17cd49-9ba9-4e4d-af25-028d608ae48f\") " pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.618140 5058 scope.go:117] "RemoveContainer" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.640907 5058 scope.go:117] "RemoveContainer" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.661986 5058 scope.go:117] "RemoveContainer" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.682666 5058 scope.go:117] "RemoveContainer" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.702517 5058 scope.go:117] "RemoveContainer" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.724412 5058 scope.go:117] "RemoveContainer" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.748113 5058 scope.go:117] "RemoveContainer" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.748658 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": container with ID starting with 7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2 not found: ID does not exist" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.748717 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"} err="failed to get container status \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": rpc error: code = NotFound desc = could not find container \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": container with ID starting with 7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.748756 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.749500 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": container with ID starting with 33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697 not found: ID does not exist" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.749557 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} err="failed to get container status \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": rpc error: code = NotFound desc = could not find container \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": container with ID starting with 33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.749593 5058 scope.go:117] "RemoveContainer" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.750160 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": container with ID starting with 028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c not found: ID does not exist" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.750196 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"} err="failed to get container status \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": rpc error: code = NotFound desc = could not find container \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": container with ID starting with 028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.750223 5058 scope.go:117] "RemoveContainer" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc 
kubenswrapper[5058]: E1014 06:59:14.750895 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": container with ID starting with 172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67 not found: ID does not exist" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.750939 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"} err="failed to get container status \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": rpc error: code = NotFound desc = could not find container \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": container with ID starting with 172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.750968 5058 scope.go:117] "RemoveContainer" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.752010 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": container with ID starting with a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035 not found: ID does not exist" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.752052 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"} err="failed to get container status \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": rpc error: code = NotFound desc = could not find container \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": container with ID starting with a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.752070 5058 scope.go:117] "RemoveContainer" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.752701 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": container with ID starting with b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029 not found: ID does not exist" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.752759 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"} err="failed to get container status \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": rpc error: code = NotFound desc = could not find container \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": container with ID starting with b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: 
I1014 06:59:14.752829 5058 scope.go:117] "RemoveContainer" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.753412 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": container with ID starting with dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45 not found: ID does not exist" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.753455 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"} err="failed to get container status \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": rpc error: code = NotFound desc = could not find container \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": container with ID starting with dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.753485 5058 scope.go:117] "RemoveContainer" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.754479 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": container with ID starting with ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc not found: ID does not exist" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.754521 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"} err="failed to get container status \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": rpc error: code = NotFound desc = could not find container \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": container with ID starting with ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.754551 5058 scope.go:117] "RemoveContainer" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.755162 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": container with ID starting with 4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670 not found: ID does not exist" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.755199 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"} err="failed to get container status \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": rpc error: code = NotFound desc = could not find container \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": container 
with ID starting with 4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.755223 5058 scope.go:117] "RemoveContainer" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: E1014 06:59:14.755968 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": container with ID starting with 040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8 not found: ID does not exist" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.756000 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"} err="failed to get container status \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": rpc error: code = NotFound desc = could not find container \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": container with ID starting with 040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.756025 5058 scope.go:117] "RemoveContainer" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.757768 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"} err="failed to get container status \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": rpc error: code = NotFound desc = could not find container \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": container with ID starting with 7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.757817 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.758293 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} err="failed to get container status \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": rpc error: code = NotFound desc = could not find container \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": container with ID starting with 33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.758355 5058 scope.go:117] "RemoveContainer" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.758870 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"} err="failed to get container status \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": rpc error: code = NotFound desc = could not find container \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": container 
with ID starting with 028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.758912 5058 scope.go:117] "RemoveContainer" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759274 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"} err="failed to get container status \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": rpc error: code = NotFound desc = could not find container \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": container with ID starting with 172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759303 5058 scope.go:117] "RemoveContainer" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759603 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"} err="failed to get container status \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": rpc error: code = NotFound desc = could not find container \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": container with ID starting with a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759626 5058 scope.go:117] "RemoveContainer" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759938 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"} err="failed to get container status \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": rpc error: code = NotFound desc = could not find container \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": container with ID starting with b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.759961 5058 scope.go:117] "RemoveContainer" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.760244 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"} err="failed to get container status \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": rpc error: code = NotFound desc = could not find container \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": container with ID starting with dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.760267 5058 scope.go:117] "RemoveContainer" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.760650 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"} err="failed to get container status \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": rpc error: code = NotFound desc = could not find container \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": container with ID starting with ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.760705 5058 scope.go:117] "RemoveContainer" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.761190 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"} err="failed to get container status \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": rpc error: code = NotFound desc = could not find container \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": container with ID starting with 4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.761220 5058 scope.go:117] "RemoveContainer" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.761576 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"} err="failed to get container status \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": rpc error: code = NotFound desc = could not find container \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": container with ID starting with 040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.761610 5058 scope.go:117] "RemoveContainer" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.762250 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"} err="failed to get container status \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": rpc error: code = NotFound desc = could not find container \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": container with ID starting with 7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.762277 5058 scope.go:117] "RemoveContainer" containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.762670 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} err="failed to get container status \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": rpc error: code = NotFound desc = could not find container \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": container with ID starting with 33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697 not found: ID does not exist" Oct 
14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.762710 5058 scope.go:117] "RemoveContainer" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763102 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"} err="failed to get container status \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": rpc error: code = NotFound desc = could not find container \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": container with ID starting with 028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763126 5058 scope.go:117] "RemoveContainer" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763464 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"} err="failed to get container status \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": rpc error: code = NotFound desc = could not find container \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": container with ID starting with 172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763515 5058 scope.go:117] "RemoveContainer" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763863 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"} err="failed to get container status \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": rpc error: code = NotFound desc = could not find container \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": container with ID starting with a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.763887 5058 scope.go:117] "RemoveContainer" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.764207 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"} err="failed to get container status \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": rpc error: code = NotFound desc = could not find container \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": container with ID starting with b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.764240 5058 scope.go:117] "RemoveContainer" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.764588 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"} err="failed to get container status 
\"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": rpc error: code = NotFound desc = could not find container \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": container with ID starting with dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.764616 5058 scope.go:117] "RemoveContainer" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765087 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"} err="failed to get container status \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": rpc error: code = NotFound desc = could not find container \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": container with ID starting with ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765191 5058 scope.go:117] "RemoveContainer" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765633 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"} err="failed to get container status \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": rpc error: code = NotFound desc = could not find container \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": container with ID starting with 4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765658 5058 scope.go:117] "RemoveContainer" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765929 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"} err="failed to get container status \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": rpc error: code = NotFound desc = could not find container \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": container with ID starting with 040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.765955 5058 scope.go:117] "RemoveContainer" containerID="7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.766276 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2"} err="failed to get container status \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": rpc error: code = NotFound desc = could not find container \"7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2\": container with ID starting with 7974fd20bf19cc8ffbfb818f0919a115d236afdf513257039d7072528f386dd2 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.766319 5058 scope.go:117] "RemoveContainer" 
containerID="33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.766738 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697"} err="failed to get container status \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": rpc error: code = NotFound desc = could not find container \"33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697\": container with ID starting with 33a897b94edf965089faf6d1f8a4462f6a3d67f8ef0ed0fcdf9e2d6ee1663697 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.766767 5058 scope.go:117] "RemoveContainer" containerID="028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767064 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c"} err="failed to get container status \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": rpc error: code = NotFound desc = could not find container \"028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c\": container with ID starting with 028b3968f69df35fb1676d10d2823b2f214716e3484ad71898c8599018211a7c not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767089 5058 scope.go:117] "RemoveContainer" containerID="172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767455 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67"} err="failed to get container status \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": rpc error: code = NotFound desc = could not find container \"172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67\": container with ID starting with 172a07e62ea4818e328c95587787fb2130fc15e02a7b8da32383973f814cfa67 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767510 5058 scope.go:117] "RemoveContainer" containerID="a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767934 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035"} err="failed to get container status \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": rpc error: code = NotFound desc = could not find container \"a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035\": container with ID starting with a5405a466da49c2689dfe66a3b6678ecd27ec7a42ef5510bc330b273c639f035 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.767960 5058 scope.go:117] "RemoveContainer" containerID="b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.768333 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029"} err="failed to get container status \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": rpc error: code = NotFound desc = could not find 
container \"b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029\": container with ID starting with b55c6ba905bc69afafdb3871354b3d3a7dfec43f46d9556ef21597bdeb878029 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.768374 5058 scope.go:117] "RemoveContainer" containerID="dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.768837 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45"} err="failed to get container status \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": rpc error: code = NotFound desc = could not find container \"dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45\": container with ID starting with dbd581ae264f3d7f7914bba005aaa4cd5dbc60b812e287a879a64a266936bc45 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.768876 5058 scope.go:117] "RemoveContainer" containerID="ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.769308 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc"} err="failed to get container status \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": rpc error: code = NotFound desc = could not find container \"ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc\": container with ID starting with ee6ecd92da4b8a1b897ded59825f6da4ce2b94c76ed6e974055c3503793e68cc not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.769345 5058 scope.go:117] "RemoveContainer" containerID="4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.769679 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670"} err="failed to get container status \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": rpc error: code = NotFound desc = could not find container \"4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670\": container with ID starting with 4c2010ad851018f789d7eaed29b4e92220b55ef3428c4409706dbf5b028b2670 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.769713 5058 scope.go:117] "RemoveContainer" containerID="040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.770110 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8"} err="failed to get container status \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": rpc error: code = NotFound desc = could not find container \"040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8\": container with ID starting with 040779952bec889c24e263196d739cf3d57505ae243bb6a87ea3bfe2685b20e8 not found: ID does not exist" Oct 14 06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.798104 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58308f56-cccd-4c52-89af-c23806a4769e" path="/var/lib/kubelet/pods/58308f56-cccd-4c52-89af-c23806a4769e/volumes" Oct 14 
06:59:14 crc kubenswrapper[5058]: I1014 06:59:14.904182 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:15 crc kubenswrapper[5058]: I1014 06:59:15.503641 5058 generic.go:334] "Generic (PLEG): container finished" podID="8b17cd49-9ba9-4e4d-af25-028d608ae48f" containerID="f5a2d0f265ee6c9d16f4d790490755624e145233c394b73c547a4f89031a5a61" exitCode=0 Oct 14 06:59:15 crc kubenswrapper[5058]: I1014 06:59:15.503725 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerDied","Data":"f5a2d0f265ee6c9d16f4d790490755624e145233c394b73c547a4f89031a5a61"} Oct 14 06:59:15 crc kubenswrapper[5058]: I1014 06:59:15.504308 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"6eab94bf7334762a0739531cc684bbeb8caa9519f8466331a99a5828b0844d09"} Oct 14 06:59:15 crc kubenswrapper[5058]: I1014 06:59:15.510368 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/2.log" Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.526843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"857ef5c3fef6ceeba1e1ea85bb2c519798b434305c1268ddb25313d479b4846e"} Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.527765 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"e79391f60b3274ea3ba35e381a903cc40718abcebee7c756b347b179cd4f513f"} Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.527779 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"dfde33ec7475502fa996bd47ba1ae406a9838113e230d900ac4c0b438e15f97f"} Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.527790 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"3391a49fe7c785440e3e75056c597a7dbe75acae0fcba25330781381f9eb2dfc"} Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.527833 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"24893ce51330c464fc45214971fc60d6c1d568ab6ff4cddcdba680c14088d72b"} Oct 14 06:59:16 crc kubenswrapper[5058]: I1014 06:59:16.527845 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"69b308f5ddf65cd41601f612c45b1f13c442b413e5d9b94290c6050c79cfc54b"} Oct 14 06:59:19 crc kubenswrapper[5058]: I1014 06:59:19.565758 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"8addf4f5620efe0c74fbb8b264019c0cf18a62e05fb86cbc71c29ad676f0daa7"} Oct 14 06:59:21 crc 
kubenswrapper[5058]: I1014 06:59:21.582064 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" event={"ID":"8b17cd49-9ba9-4e4d-af25-028d608ae48f","Type":"ContainerStarted","Data":"9a615f93f49609e878c6ef2cc2297662a32748863fa775ec0b07969dfa0790ea"} Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.582388 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.582406 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.582419 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.612832 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" podStartSLOduration=7.612788324 podStartE2EDuration="7.612788324s" podCreationTimestamp="2025-10-14 06:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:59:21.611451497 +0000 UTC m=+709.522535323" watchObservedRunningTime="2025-10-14 06:59:21.612788324 +0000 UTC m=+709.523872140" Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.613135 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:21 crc kubenswrapper[5058]: I1014 06:59:21.617995 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.192525 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-zpvzc"] Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.193899 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.197900 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.198738 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.200187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.200712 5058 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nmkt2" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.201881 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-zpvzc"] Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.313525 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.313627 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.313685 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw585\" (UniqueName: \"kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.414873 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.414976 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw585\" (UniqueName: \"kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.415016 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.415322 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " 
pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.416623 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.445694 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw585\" (UniqueName: \"kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585\") pod \"crc-storage-crc-zpvzc\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.517651 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.557282 5058 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(4e6ae5bd25560041eb824642c9743e9ce5c542d5c48339a2e57c5ef6ac82950e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.557373 5058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(4e6ae5bd25560041eb824642c9743e9ce5c542d5c48339a2e57c5ef6ac82950e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.557408 5058 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(4e6ae5bd25560041eb824642c9743e9ce5c542d5c48339a2e57c5ef6ac82950e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.557479 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(4e6ae5bd25560041eb824642c9743e9ce5c542d5c48339a2e57c5ef6ac82950e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-zpvzc" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.600297 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: I1014 06:59:24.600924 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.644190 5058 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(da07f6c989c32d1aa21da09d35942b19daa9501be6b8fd8af0bcda5c2d642b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.644279 5058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(da07f6c989c32d1aa21da09d35942b19daa9501be6b8fd8af0bcda5c2d642b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.644353 5058 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(da07f6c989c32d1aa21da09d35942b19daa9501be6b8fd8af0bcda5c2d642b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:24 crc kubenswrapper[5058]: E1014 06:59:24.644427 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(da07f6c989c32d1aa21da09d35942b19daa9501be6b8fd8af0bcda5c2d642b16): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-zpvzc" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" Oct 14 06:59:25 crc kubenswrapper[5058]: I1014 06:59:25.790401 5058 scope.go:117] "RemoveContainer" containerID="eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081" Oct 14 06:59:25 crc kubenswrapper[5058]: E1014 06:59:25.790563 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-csl4q_openshift-multus(1288bab5-7372-4acc-963c-6232b27a7975)\"" pod="openshift-multus/multus-csl4q" podUID="1288bab5-7372-4acc-963c-6232b27a7975" Oct 14 06:59:33 crc kubenswrapper[5058]: I1014 06:59:33.656342 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 06:59:33 crc kubenswrapper[5058]: I1014 06:59:33.657131 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 06:59:37 crc kubenswrapper[5058]: I1014 06:59:37.789642 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:37 crc kubenswrapper[5058]: I1014 06:59:37.790745 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:37 crc kubenswrapper[5058]: E1014 06:59:37.833934 5058 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(c6db6d7c3f1626a510a920f58cdb4099131575bbab491581faefb1c4276c5499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 06:59:37 crc kubenswrapper[5058]: E1014 06:59:37.834072 5058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(c6db6d7c3f1626a510a920f58cdb4099131575bbab491581faefb1c4276c5499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:37 crc kubenswrapper[5058]: E1014 06:59:37.834123 5058 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(c6db6d7c3f1626a510a920f58cdb4099131575bbab491581faefb1c4276c5499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
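The "back-off 20s restarting failed container=kube-multus" entry above is the kubelet's crash-loop delay: each failed restart roughly doubles the wait before the next attempt. A minimal sketch of that series, assuming the upstream kubelet defaults of a 10s initial delay doubling up to a 5m cap (on those assumptions, the 20s wait would be the second failed restart):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed upstream kubelet defaults: 10s initial delay, doubled after
	// each failed restart, capped at 5m; "back-off 20s" above would be the
	// wait before the second restart of kube-multus.
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```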
pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:37 crc kubenswrapper[5058]: E1014 06:59:37.834228 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-zpvzc_crc-storage(46b22025-6ec3-4bd1-9da1-4ee638664213)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-zpvzc_crc-storage_46b22025-6ec3-4bd1-9da1-4ee638664213_0(c6db6d7c3f1626a510a920f58cdb4099131575bbab491581faefb1c4276c5499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-zpvzc" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" Oct 14 06:59:38 crc kubenswrapper[5058]: I1014 06:59:38.790532 5058 scope.go:117] "RemoveContainer" containerID="eb987a24e03365652527e60dceb65eb0041958bd107363d333033b2b6cb45081" Oct 14 06:59:39 crc kubenswrapper[5058]: I1014 06:59:39.708728 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-csl4q_1288bab5-7372-4acc-963c-6232b27a7975/kube-multus/2.log" Oct 14 06:59:39 crc kubenswrapper[5058]: I1014 06:59:39.709169 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-csl4q" event={"ID":"1288bab5-7372-4acc-963c-6232b27a7975","Type":"ContainerStarted","Data":"4bd2d8829201c524f1e7501e477dd02abf31e387d41d76cf1c1232801b6015ec"} Oct 14 06:59:44 crc kubenswrapper[5058]: I1014 06:59:44.929206 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xnc59" Oct 14 06:59:50 crc kubenswrapper[5058]: I1014 06:59:50.789234 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:50 crc kubenswrapper[5058]: I1014 06:59:50.789749 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:51 crc kubenswrapper[5058]: I1014 06:59:51.065671 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-zpvzc"] Oct 14 06:59:51 crc kubenswrapper[5058]: I1014 06:59:51.076322 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 06:59:51 crc kubenswrapper[5058]: I1014 06:59:51.792746 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-zpvzc" event={"ID":"46b22025-6ec3-4bd1-9da1-4ee638664213","Type":"ContainerStarted","Data":"60901abaf47b15ac594e930f66eec15261bb8f60fcc017f30993ea4fbee59ebb"} Oct 14 06:59:52 crc kubenswrapper[5058]: I1014 06:59:52.802369 5058 generic.go:334] "Generic (PLEG): container finished" podID="46b22025-6ec3-4bd1-9da1-4ee638664213" containerID="2814f53bbc9e56c00725c9e922d34cb8904cb61d98a5955178f1d41aa429a203" exitCode=0 Oct 14 06:59:52 crc kubenswrapper[5058]: I1014 06:59:52.802784 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-zpvzc" event={"ID":"46b22025-6ec3-4bd1-9da1-4ee638664213","Type":"ContainerDied","Data":"2814f53bbc9e56c00725c9e922d34cb8904cb61d98a5955178f1d41aa429a203"} Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.154632 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.334255 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt\") pod \"46b22025-6ec3-4bd1-9da1-4ee638664213\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.334340 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage\") pod \"46b22025-6ec3-4bd1-9da1-4ee638664213\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.334351 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "46b22025-6ec3-4bd1-9da1-4ee638664213" (UID: "46b22025-6ec3-4bd1-9da1-4ee638664213"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.334400 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw585\" (UniqueName: \"kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585\") pod \"46b22025-6ec3-4bd1-9da1-4ee638664213\" (UID: \"46b22025-6ec3-4bd1-9da1-4ee638664213\") " Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.334860 5058 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/46b22025-6ec3-4bd1-9da1-4ee638664213-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.344838 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585" (OuterVolumeSpecName: "kube-api-access-xw585") pod "46b22025-6ec3-4bd1-9da1-4ee638664213" (UID: "46b22025-6ec3-4bd1-9da1-4ee638664213"). InnerVolumeSpecName "kube-api-access-xw585". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.350331 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "46b22025-6ec3-4bd1-9da1-4ee638664213" (UID: "46b22025-6ec3-4bd1-9da1-4ee638664213"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.436160 5058 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/46b22025-6ec3-4bd1-9da1-4ee638664213-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.436193 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw585\" (UniqueName: \"kubernetes.io/projected/46b22025-6ec3-4bd1-9da1-4ee638664213-kube-api-access-xw585\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.816288 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-zpvzc" Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.816276 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-zpvzc" event={"ID":"46b22025-6ec3-4bd1-9da1-4ee638664213","Type":"ContainerDied","Data":"60901abaf47b15ac594e930f66eec15261bb8f60fcc017f30993ea4fbee59ebb"} Oct 14 06:59:54 crc kubenswrapper[5058]: I1014 06:59:54.816411 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60901abaf47b15ac594e930f66eec15261bb8f60fcc017f30993ea4fbee59ebb" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.063742 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.064291 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" containerName="controller-manager" containerID="cri-o://568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb" gracePeriod=30 Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.224505 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.224824 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerName="route-controller-manager" containerID="cri-o://49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab" gracePeriod=30 Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.432394 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.525827 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles\") pod \"e09468c0-bfc1-42fe-918d-4068041806c0\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.525869 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config\") pod \"e09468c0-bfc1-42fe-918d-4068041806c0\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.525905 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert\") pod \"e09468c0-bfc1-42fe-918d-4068041806c0\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.525923 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqxfd\" (UniqueName: \"kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd\") pod \"e09468c0-bfc1-42fe-918d-4068041806c0\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.525960 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca\") pod \"e09468c0-bfc1-42fe-918d-4068041806c0\" (UID: \"e09468c0-bfc1-42fe-918d-4068041806c0\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.526766 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "e09468c0-bfc1-42fe-918d-4068041806c0" (UID: "e09468c0-bfc1-42fe-918d-4068041806c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.526761 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e09468c0-bfc1-42fe-918d-4068041806c0" (UID: "e09468c0-bfc1-42fe-918d-4068041806c0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.526824 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config" (OuterVolumeSpecName: "config") pod "e09468c0-bfc1-42fe-918d-4068041806c0" (UID: "e09468c0-bfc1-42fe-918d-4068041806c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.533658 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd" (OuterVolumeSpecName: "kube-api-access-gqxfd") pod "e09468c0-bfc1-42fe-918d-4068041806c0" (UID: "e09468c0-bfc1-42fe-918d-4068041806c0"). InnerVolumeSpecName "kube-api-access-gqxfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.547251 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e09468c0-bfc1-42fe-918d-4068041806c0" (UID: "e09468c0-bfc1-42fe-918d-4068041806c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.559207 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.627312 5058 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.627354 5058 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.627368 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e09468c0-bfc1-42fe-918d-4068041806c0-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.627377 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e09468c0-bfc1-42fe-918d-4068041806c0-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.627389 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqxfd\" (UniqueName: \"kubernetes.io/projected/e09468c0-bfc1-42fe-918d-4068041806c0-kube-api-access-gqxfd\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728005 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t584j\" (UniqueName: \"kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j\") pod \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728097 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") pod \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728169 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") pod \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728222 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") pod \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\" (UID: \"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9\") " Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728783 5058 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config" (OuterVolumeSpecName: "config") pod "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.728909 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca" (OuterVolumeSpecName: "client-ca") pod "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.733361 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.733915 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j" (OuterVolumeSpecName: "kube-api-access-t584j") pod "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" (UID: "56f2ec8d-e5f5-41f2-8440-0b771ff55ee9"). InnerVolumeSpecName "kube-api-access-t584j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829012 5058 generic.go:334] "Generic (PLEG): container finished" podID="e09468c0-bfc1-42fe-918d-4068041806c0" containerID="568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb" exitCode=0 Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829079 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829093 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" event={"ID":"e09468c0-bfc1-42fe-918d-4068041806c0","Type":"ContainerDied","Data":"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb"} Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829228 5058 scope.go:117] "RemoveContainer" containerID="568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829324 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mdp6d" event={"ID":"e09468c0-bfc1-42fe-918d-4068041806c0","Type":"ContainerDied","Data":"d1bbf61b076f654ae38cf4e1efa7b041f7ce2a97a8a69dd0cb165796a97be110"} Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829572 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-config\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829586 5058 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829597 5058 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.829623 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t584j\" (UniqueName: \"kubernetes.io/projected/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9-kube-api-access-t584j\") on node \"crc\" DevicePath \"\"" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.831311 5058 generic.go:334] "Generic (PLEG): container finished" podID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerID="49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab" exitCode=0 Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.831363 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" event={"ID":"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9","Type":"ContainerDied","Data":"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab"} Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.831394 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" event={"ID":"56f2ec8d-e5f5-41f2-8440-0b771ff55ee9","Type":"ContainerDied","Data":"c8f3cf1594e4ade6f4486de57b8db88d7d781ac639495dd9c2990c8d944e880c"} Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.831367 5058 util.go:48] "No ready sandbox for pod can be found. 
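The two controller-manager deletions above show the kubelet's termination sequence: "Killing container with a grace period" (gracePeriod=30) asks the runtime to stop the container, the pod's volumes are torn down, and the PLEG then reports ContainerDied with exitCode=0; the "ContainerStatus ... NotFound" lines that follow are just the idempotent RemoveContainer retry finding the container already gone. A rough user-space sketch of the TERM-then-KILL grace-period pattern, using a stand-in process rather than CRI-O/kubelet internals:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for the container's main process; not runtime code.
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Graceful stop request, as with gracePeriod=30 above.
	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		fmt.Println("exited within the grace period:", err)
	case <-time.After(30 * time.Second):
		_ = cmd.Process.Kill() // grace period exhausted: SIGKILL
		fmt.Println("force-killed after the grace period:", <-done)
	}
}
```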
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.845242 5058 scope.go:117] "RemoveContainer" containerID="568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb" Oct 14 06:59:56 crc kubenswrapper[5058]: E1014 06:59:56.845679 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb\": container with ID starting with 568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb not found: ID does not exist" containerID="568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.845717 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb"} err="failed to get container status \"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb\": rpc error: code = NotFound desc = could not find container \"568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb\": container with ID starting with 568a869306c27f2528c9b08d7005eacb6a2a48237566029b5d4b05beddab09fb not found: ID does not exist" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.845736 5058 scope.go:117] "RemoveContainer" containerID="49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.850718 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.850756 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mdp6d"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.859973 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.863041 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-79mpw"] Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.863636 5058 scope.go:117] "RemoveContainer" containerID="49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab" Oct 14 06:59:56 crc kubenswrapper[5058]: E1014 06:59:56.864141 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab\": container with ID starting with 49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab not found: ID does not exist" containerID="49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab" Oct 14 06:59:56 crc kubenswrapper[5058]: I1014 06:59:56.864176 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab"} err="failed to get container status \"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab\": rpc error: code = NotFound desc = could not find container \"49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab\": container with ID starting with 49fd823f69f91329398a75e505bd869f238ca5b0b54ddacef76979396d7771ab not found: ID does not exist" Oct 14 
06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.933949 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68c7749744-2dsgc"] Oct 14 06:59:57 crc kubenswrapper[5058]: E1014 06:59:57.934686 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerName="route-controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.934707 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerName="route-controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: E1014 06:59:57.934728 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" containerName="controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.934741 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" containerName="controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: E1014 06:59:57.934767 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" containerName="storage" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.934779 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" containerName="storage" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.934984 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" containerName="controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.935003 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" containerName="storage" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.935019 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" containerName="route-controller-manager" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.935559 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.938288 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.939429 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.939615 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.939822 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.940243 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.946379 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-config\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.946472 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-serving-cert\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.946504 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-proxy-ca-bundles\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.946543 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmkh\" (UniqueName: \"kubernetes.io/projected/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-kube-api-access-bzmkh\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.946565 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-client-ca\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.951180 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd"] Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.952624 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.958877 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.958926 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.959092 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.959110 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.959318 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.959908 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.960329 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.960513 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.964352 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd"] Oct 14 06:59:57 crc kubenswrapper[5058]: I1014 06:59:57.965666 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c7749744-2dsgc"] Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049119 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-proxy-ca-bundles\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049537 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzmkh\" (UniqueName: \"kubernetes.io/projected/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-kube-api-access-bzmkh\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049619 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-client-ca\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049668 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2c9a95-a0db-407b-9063-c250df84beaf-serving-cert\") pod 
\"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049709 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-config\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049778 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-config\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049851 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-client-ca\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049893 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rpzw\" (UniqueName: \"kubernetes.io/projected/7e2c9a95-a0db-407b-9063-c250df84beaf-kube-api-access-8rpzw\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.049950 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-serving-cert\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.052636 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-proxy-ca-bundles\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.052668 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-config\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.056653 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-client-ca\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " 
pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.057574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-serving-cert\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.071607 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmkh\" (UniqueName: \"kubernetes.io/projected/49e97f67-a83a-4bd4-9c3f-ab793ff33c29-kube-api-access-bzmkh\") pod \"controller-manager-68c7749744-2dsgc\" (UID: \"49e97f67-a83a-4bd4-9c3f-ab793ff33c29\") " pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.151173 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2c9a95-a0db-407b-9063-c250df84beaf-serving-cert\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.151459 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-config\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.151558 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-client-ca\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.151651 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rpzw\" (UniqueName: \"kubernetes.io/projected/7e2c9a95-a0db-407b-9063-c250df84beaf-kube-api-access-8rpzw\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.152939 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-client-ca\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.153336 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2c9a95-a0db-407b-9063-c250df84beaf-config\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.157594 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2c9a95-a0db-407b-9063-c250df84beaf-serving-cert\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.172236 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rpzw\" (UniqueName: \"kubernetes.io/projected/7e2c9a95-a0db-407b-9063-c250df84beaf-kube-api-access-8rpzw\") pod \"route-controller-manager-8c4b64bb-m59bd\" (UID: \"7e2c9a95-a0db-407b-9063-c250df84beaf\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.254159 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.272590 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.507878 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c7749744-2dsgc"] Oct 14 06:59:58 crc kubenswrapper[5058]: W1014 06:59:58.514090 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49e97f67_a83a_4bd4_9c3f_ab793ff33c29.slice/crio-b155d5fb09c9ef2517cbb7e6b6a24e4148533139b21d6a2cdab3c814444df92a WatchSource:0}: Error finding container b155d5fb09c9ef2517cbb7e6b6a24e4148533139b21d6a2cdab3c814444df92a: Status 404 returned error can't find the container with id b155d5fb09c9ef2517cbb7e6b6a24e4148533139b21d6a2cdab3c814444df92a Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.532470 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd"] Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.795582 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f2ec8d-e5f5-41f2-8440-0b771ff55ee9" path="/var/lib/kubelet/pods/56f2ec8d-e5f5-41f2-8440-0b771ff55ee9/volumes" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.796288 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09468c0-bfc1-42fe-918d-4068041806c0" path="/var/lib/kubelet/pods/e09468c0-bfc1-42fe-918d-4068041806c0/volumes" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.849531 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" event={"ID":"7e2c9a95-a0db-407b-9063-c250df84beaf","Type":"ContainerStarted","Data":"74f5d9428153016ddea6e1566d7543c2f5b785f594baa7f11c36ab3cc158e6c8"} Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.849583 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" event={"ID":"7e2c9a95-a0db-407b-9063-c250df84beaf","Type":"ContainerStarted","Data":"8441eb469ede93f4975356c8fe0cbce6293fa22338bb7100816df1e4f5419d33"} Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.850607 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
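The podStartSLOduration figures above are consistent with plain timestamp arithmetic: since both pull timestamps are the zero time (no image pulls), the reported value appears to equal watchObservedRunningTime minus podCreationTimestamp. A quick check of the route-controller-manager numbers, with both timestamps copied from the entry above (that the metric reduces to this difference when pull time is zero is an assumption about its definition, not confirmed here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-10-14 06:59:56 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2025-10-14 06:59:58.86974402 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2.86974402s, matching podStartSLOduration=2.8697440199999997.
	fmt.Println(watched.Sub(created))
}
```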
pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.852427 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" event={"ID":"49e97f67-a83a-4bd4-9c3f-ab793ff33c29","Type":"ContainerStarted","Data":"addbe2c9ac6b6bf525555f72f23d10d78918b5951e89c4731066a899f9df37a3"} Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.852467 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" event={"ID":"49e97f67-a83a-4bd4-9c3f-ab793ff33c29","Type":"ContainerStarted","Data":"b155d5fb09c9ef2517cbb7e6b6a24e4148533139b21d6a2cdab3c814444df92a"} Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.853041 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.858572 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.869762 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" podStartSLOduration=2.8697440199999997 podStartE2EDuration="2.86974402s" podCreationTimestamp="2025-10-14 06:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:59:58.865601593 +0000 UTC m=+746.776685399" watchObservedRunningTime="2025-10-14 06:59:58.86974402 +0000 UTC m=+746.780827826" Oct 14 06:59:58 crc kubenswrapper[5058]: I1014 06:59:58.893227 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68c7749744-2dsgc" podStartSLOduration=2.893213652 podStartE2EDuration="2.893213652s" podCreationTimestamp="2025-10-14 06:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 06:59:58.889575809 +0000 UTC m=+746.800659625" watchObservedRunningTime="2025-10-14 06:59:58.893213652 +0000 UTC m=+746.804297458" Oct 14 06:59:59 crc kubenswrapper[5058]: I1014 06:59:59.153637 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c4b64bb-m59bd" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.141024 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g"] Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.141862 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.144049 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.144078 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.161348 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g"] Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.176533 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.176576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg94l\" (UniqueName: \"kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.176613 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.278262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.278336 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg94l\" (UniqueName: \"kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.278377 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.279987 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume\") pod 
\"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.286508 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.301903 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg94l\" (UniqueName: \"kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l\") pod \"collect-profiles-29340420-r8j8g\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.473692 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:00 crc kubenswrapper[5058]: I1014 07:00:00.945715 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g"] Oct 14 07:00:00 crc kubenswrapper[5058]: W1014 07:00:00.955729 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcfdfb7_2c52_4e13_919c_d63a7ef2f111.slice/crio-1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7 WatchSource:0}: Error finding container 1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7: Status 404 returned error can't find the container with id 1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7 Oct 14 07:00:01 crc kubenswrapper[5058]: I1014 07:00:01.871996 5058 generic.go:334] "Generic (PLEG): container finished" podID="cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" containerID="dcd81169984a0bfe708691a602bcb12d2d2d7d6638201820ee6b92411c854a7a" exitCode=0 Oct 14 07:00:01 crc kubenswrapper[5058]: I1014 07:00:01.872131 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" event={"ID":"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111","Type":"ContainerDied","Data":"dcd81169984a0bfe708691a602bcb12d2d2d7d6638201820ee6b92411c854a7a"} Oct 14 07:00:01 crc kubenswrapper[5058]: I1014 07:00:01.874003 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" event={"ID":"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111","Type":"ContainerStarted","Data":"1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7"} Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.189987 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw"] Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.190983 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.196426 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.200065 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw"] Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.257436 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.257590 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtbh\" (UniqueName: \"kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.257730 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.359413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.359482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.359513 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtbh\" (UniqueName: \"kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.360017 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.360154 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.402249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtbh\" (UniqueName: \"kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.561015 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.849193 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw"] Oct 14 07:00:02 crc kubenswrapper[5058]: I1014 07:00:02.882092 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" event={"ID":"ba6d3dfa-60c7-482a-8452-c3cafc29897e","Type":"ContainerStarted","Data":"253018b19e3982567bfecfd3d9cc616b20e6ccd97c1fb049593adc1216c9e7a1"} Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.143523 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.270966 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume\") pod \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.271060 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume\") pod \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.271172 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg94l\" (UniqueName: \"kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l\") pod \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\" (UID: \"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111\") " Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.272429 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" (UID: "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.276434 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" (UID: "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.277386 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l" (OuterVolumeSpecName: "kube-api-access-wg94l") pod "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" (UID: "cdcfdfb7-2c52-4e13-919c-d63a7ef2f111"). InnerVolumeSpecName "kube-api-access-wg94l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.373490 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.373543 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.373565 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg94l\" (UniqueName: \"kubernetes.io/projected/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111-kube-api-access-wg94l\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.655933 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.656013 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.656074 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.656936 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.657085 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d" gracePeriod=600 Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.896975 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" event={"ID":"cdcfdfb7-2c52-4e13-919c-d63a7ef2f111","Type":"ContainerDied","Data":"1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7"} Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.897491 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1085dc2f258727775f4f044764e4baf22c9db986d6e7d6f48b007476dd8e87e7" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.897015 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g" Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.899318 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerID="b17ce2abe5323bd3fa26d69ef9b59763e799e6da8297f0e51abbb6737302ff54" exitCode=0 Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.899382 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" event={"ID":"ba6d3dfa-60c7-482a-8452-c3cafc29897e","Type":"ContainerDied","Data":"b17ce2abe5323bd3fa26d69ef9b59763e799e6da8297f0e51abbb6737302ff54"} Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.906242 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d" exitCode=0 Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.906300 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d"} Oct 14 07:00:03 crc kubenswrapper[5058]: I1014 07:00:03.906418 5058 scope.go:117] "RemoveContainer" containerID="03471caec5a270f474cd1f8380fe5ceb96fa9df94e3644b8d0ac5d170f514368" Oct 14 07:00:04 crc kubenswrapper[5058]: I1014 07:00:04.918598 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad"} Oct 14 07:00:05 crc kubenswrapper[5058]: I1014 07:00:05.926967 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerID="33c85b94ca7816468262f97a39c6bef82a29989df9689ec3930715dae827601a" exitCode=0 Oct 14 07:00:05 crc kubenswrapper[5058]: I1014 07:00:05.927120 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" event={"ID":"ba6d3dfa-60c7-482a-8452-c3cafc29897e","Type":"ContainerDied","Data":"33c85b94ca7816468262f97a39c6bef82a29989df9689ec3930715dae827601a"} Oct 14 07:00:06 crc kubenswrapper[5058]: I1014 07:00:06.938350 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerID="dc6c9db2ffd993e2f2a83bc917dc2ace0a5bc541719ab6b30766b915e0f3a9e1" exitCode=0 Oct 14 07:00:06 crc kubenswrapper[5058]: I1014 07:00:06.938458 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" event={"ID":"ba6d3dfa-60c7-482a-8452-c3cafc29897e","Type":"ContainerDied","Data":"dc6c9db2ffd993e2f2a83bc917dc2ace0a5bc541719ab6b30766b915e0f3a9e1"} Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.427068 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498604 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"] Oct 14 07:00:08 crc kubenswrapper[5058]: E1014 07:00:08.498817 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="extract" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498830 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="extract" Oct 14 07:00:08 crc kubenswrapper[5058]: E1014 07:00:08.498841 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" containerName="collect-profiles" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498847 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" containerName="collect-profiles" Oct 14 07:00:08 crc kubenswrapper[5058]: E1014 07:00:08.498862 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="pull" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498869 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="pull" Oct 14 07:00:08 crc kubenswrapper[5058]: E1014 07:00:08.498879 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="util" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498885 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="util" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.498982 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6d3dfa-60c7-482a-8452-c3cafc29897e" containerName="extract" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.499000 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" containerName="collect-profiles" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.499833 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.507977 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"] Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.531908 5058 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.550690 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util\") pod \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.550816 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtbh\" (UniqueName: \"kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh\") pod \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.550868 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle\") pod \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\" (UID: \"ba6d3dfa-60c7-482a-8452-c3cafc29897e\") " Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.551052 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.551085 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.551182 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqhr\" (UniqueName: \"kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.551997 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle" (OuterVolumeSpecName: "bundle") pod "ba6d3dfa-60c7-482a-8452-c3cafc29897e" (UID: "ba6d3dfa-60c7-482a-8452-c3cafc29897e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.559889 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh" (OuterVolumeSpecName: "kube-api-access-hjtbh") pod "ba6d3dfa-60c7-482a-8452-c3cafc29897e" (UID: "ba6d3dfa-60c7-482a-8452-c3cafc29897e"). InnerVolumeSpecName "kube-api-access-hjtbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.569122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util" (OuterVolumeSpecName: "util") pod "ba6d3dfa-60c7-482a-8452-c3cafc29897e" (UID: "ba6d3dfa-60c7-482a-8452-c3cafc29897e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652083 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652125 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652183 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqhr\" (UniqueName: \"kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652234 5058 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-util\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652247 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtbh\" (UniqueName: \"kubernetes.io/projected/ba6d3dfa-60c7-482a-8452-c3cafc29897e-kube-api-access-hjtbh\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652257 5058 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba6d3dfa-60c7-482a-8452-c3cafc29897e-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652769 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.652873 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.677936 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqhr\" (UniqueName: \"kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr\") pod \"redhat-operators-bcwj7\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") " 
pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.820036 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.952972 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" event={"ID":"ba6d3dfa-60c7-482a-8452-c3cafc29897e","Type":"ContainerDied","Data":"253018b19e3982567bfecfd3d9cc616b20e6ccd97c1fb049593adc1216c9e7a1"} Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.953009 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253018b19e3982567bfecfd3d9cc616b20e6ccd97c1fb049593adc1216c9e7a1" Oct 14 07:00:08 crc kubenswrapper[5058]: I1014 07:00:08.953068 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw" Oct 14 07:00:09 crc kubenswrapper[5058]: I1014 07:00:09.298044 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"] Oct 14 07:00:09 crc kubenswrapper[5058]: I1014 07:00:09.958836 5058 generic.go:334] "Generic (PLEG): container finished" podID="45af6609-b855-4d8a-9e99-38505fade914" containerID="0ea97533cb8ed8502463dc34fc8bb11e06834379c2ed89bd501f5b5122e7b921" exitCode=0 Oct 14 07:00:09 crc kubenswrapper[5058]: I1014 07:00:09.958969 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerDied","Data":"0ea97533cb8ed8502463dc34fc8bb11e06834379c2ed89bd501f5b5122e7b921"} Oct 14 07:00:09 crc kubenswrapper[5058]: I1014 07:00:09.959310 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerStarted","Data":"9f68b60970ca28f546b58dce89565687902e111d50e0a9525918975a74a0e378"} Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.635970 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-wq86f"] Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.638026 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.641532 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wk58n" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.642061 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.642112 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.658237 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-wq86f"] Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.791250 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6j8\" (UniqueName: \"kubernetes.io/projected/dcecec56-a68a-42e8-b6d0-2898aac52019-kube-api-access-5w6j8\") pod \"nmstate-operator-858ddd8f98-wq86f\" (UID: \"dcecec56-a68a-42e8-b6d0-2898aac52019\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.892632 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6j8\" (UniqueName: \"kubernetes.io/projected/dcecec56-a68a-42e8-b6d0-2898aac52019-kube-api-access-5w6j8\") pod \"nmstate-operator-858ddd8f98-wq86f\" (UID: \"dcecec56-a68a-42e8-b6d0-2898aac52019\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.914525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6j8\" (UniqueName: \"kubernetes.io/projected/dcecec56-a68a-42e8-b6d0-2898aac52019-kube-api-access-5w6j8\") pod \"nmstate-operator-858ddd8f98-wq86f\" (UID: \"dcecec56-a68a-42e8-b6d0-2898aac52019\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.960682 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" Oct 14 07:00:10 crc kubenswrapper[5058]: I1014 07:00:10.976256 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerStarted","Data":"456d9c12257963aca0374b9bbbbc2d9a385d4ef5dfa0dcd1271c264c0d413935"} Oct 14 07:00:11 crc kubenswrapper[5058]: I1014 07:00:11.413680 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-wq86f"] Oct 14 07:00:11 crc kubenswrapper[5058]: W1014 07:00:11.420513 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcecec56_a68a_42e8_b6d0_2898aac52019.slice/crio-684ed7e9e666a3db2a05252541be31aba7003c9bee306f9f7ebfa73c59015394 WatchSource:0}: Error finding container 684ed7e9e666a3db2a05252541be31aba7003c9bee306f9f7ebfa73c59015394: Status 404 returned error can't find the container with id 684ed7e9e666a3db2a05252541be31aba7003c9bee306f9f7ebfa73c59015394 Oct 14 07:00:11 crc kubenswrapper[5058]: I1014 07:00:11.985615 5058 generic.go:334] "Generic (PLEG): container finished" podID="45af6609-b855-4d8a-9e99-38505fade914" containerID="456d9c12257963aca0374b9bbbbc2d9a385d4ef5dfa0dcd1271c264c0d413935" exitCode=0 Oct 14 07:00:11 crc kubenswrapper[5058]: I1014 07:00:11.985718 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerDied","Data":"456d9c12257963aca0374b9bbbbc2d9a385d4ef5dfa0dcd1271c264c0d413935"} Oct 14 07:00:11 crc kubenswrapper[5058]: I1014 07:00:11.987459 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" event={"ID":"dcecec56-a68a-42e8-b6d0-2898aac52019","Type":"ContainerStarted","Data":"684ed7e9e666a3db2a05252541be31aba7003c9bee306f9f7ebfa73c59015394"} Oct 14 07:00:13 crc kubenswrapper[5058]: I1014 07:00:13.000556 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerStarted","Data":"269be007a83b9f2d10191cf3d7d1aaa2f0757bb04ec083cec5f01f8e868cec01"} Oct 14 07:00:13 crc kubenswrapper[5058]: I1014 07:00:13.028497 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcwj7" podStartSLOduration=2.450957238 podStartE2EDuration="5.028471336s" podCreationTimestamp="2025-10-14 07:00:08 +0000 UTC" firstStartedPulling="2025-10-14 07:00:09.96045805 +0000 UTC m=+757.871541856" lastFinishedPulling="2025-10-14 07:00:12.537972148 +0000 UTC m=+760.449055954" observedRunningTime="2025-10-14 07:00:13.020137551 +0000 UTC m=+760.931221387" watchObservedRunningTime="2025-10-14 07:00:13.028471336 +0000 UTC m=+760.939555172" Oct 14 07:00:15 crc kubenswrapper[5058]: I1014 07:00:15.015918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" event={"ID":"dcecec56-a68a-42e8-b6d0-2898aac52019","Type":"ContainerStarted","Data":"3126b8c3c74ba8ab6b6b4d8c19970972784dfc5dbd286e5537c506c2c49e112d"} Oct 14 07:00:15 crc kubenswrapper[5058]: I1014 07:00:15.040457 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-wq86f" podStartSLOduration=2.350725598 
podStartE2EDuration="5.040435289s" podCreationTimestamp="2025-10-14 07:00:10 +0000 UTC" firstStartedPulling="2025-10-14 07:00:11.422860089 +0000 UTC m=+759.333943895" lastFinishedPulling="2025-10-14 07:00:14.11256978 +0000 UTC m=+762.023653586" observedRunningTime="2025-10-14 07:00:15.036677403 +0000 UTC m=+762.947761219" watchObservedRunningTime="2025-10-14 07:00:15.040435289 +0000 UTC m=+762.951519125" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.857031 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.858507 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.862164 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9p56n" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.863881 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.864706 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.874302 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.883860 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.889787 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cxwdw"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.890610 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.907559 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914505 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl44g\" (UniqueName: \"kubernetes.io/projected/0b1b03ec-3403-4067-91e8-c1255a5e60d9-kube-api-access-zl44g\") pod \"nmstate-metrics-fdff9cb8d-rl5gs\" (UID: \"0b1b03ec-3403-4067-91e8-c1255a5e60d9\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914556 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5frl\" (UniqueName: \"kubernetes.io/projected/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-kube-api-access-b5frl\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-dbus-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914609 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8db929e-48b8-4330-9dd8-25e8744d8c79-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: \"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914623 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-ovs-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-nmstate-lock\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.914660 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxswp\" (UniqueName: \"kubernetes.io/projected/a8db929e-48b8-4330-9dd8-25e8744d8c79-kube-api-access-nxswp\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: \"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.995081 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6"] Oct 14 07:00:17 crc kubenswrapper[5058]: I1014 07:00:17.996013 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.003195 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qzmn9" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.003621 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.003837 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.011422 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6"] Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015061 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8db929e-48b8-4330-9dd8-25e8744d8c79-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: \"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-ovs-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f7h\" (UniqueName: \"kubernetes.io/projected/ebb2c83c-e257-49a9-8a93-54a6f218da22-kube-api-access-62f7h\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015149 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-nmstate-lock\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015169 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxswp\" (UniqueName: \"kubernetes.io/projected/a8db929e-48b8-4330-9dd8-25e8744d8c79-kube-api-access-nxswp\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: \"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015192 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl44g\" (UniqueName: \"kubernetes.io/projected/0b1b03ec-3403-4067-91e8-c1255a5e60d9-kube-api-access-zl44g\") pod \"nmstate-metrics-fdff9cb8d-rl5gs\" (UID: \"0b1b03ec-3403-4067-91e8-c1255a5e60d9\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015294 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb2c83c-e257-49a9-8a93-54a6f218da22-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: 
\"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015312 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-nmstate-lock\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015342 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb2c83c-e257-49a9-8a93-54a6f218da22-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015387 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-ovs-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015401 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5frl\" (UniqueName: \"kubernetes.io/projected/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-kube-api-access-b5frl\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-dbus-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.015848 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-dbus-socket\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.023284 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8db929e-48b8-4330-9dd8-25e8744d8c79-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: \"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.038960 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5frl\" (UniqueName: \"kubernetes.io/projected/5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b-kube-api-access-b5frl\") pod \"nmstate-handler-cxwdw\" (UID: \"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b\") " pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.041295 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxswp\" (UniqueName: \"kubernetes.io/projected/a8db929e-48b8-4330-9dd8-25e8744d8c79-kube-api-access-nxswp\") pod \"nmstate-webhook-6cdbc54649-pnfcp\" (UID: 
\"a8db929e-48b8-4330-9dd8-25e8744d8c79\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.041337 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl44g\" (UniqueName: \"kubernetes.io/projected/0b1b03ec-3403-4067-91e8-c1255a5e60d9-kube-api-access-zl44g\") pod \"nmstate-metrics-fdff9cb8d-rl5gs\" (UID: \"0b1b03ec-3403-4067-91e8-c1255a5e60d9\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.116144 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb2c83c-e257-49a9-8a93-54a6f218da22-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.116475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb2c83c-e257-49a9-8a93-54a6f218da22-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.116665 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62f7h\" (UniqueName: \"kubernetes.io/projected/ebb2c83c-e257-49a9-8a93-54a6f218da22-kube-api-access-62f7h\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.117218 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb2c83c-e257-49a9-8a93-54a6f218da22-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.121615 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb2c83c-e257-49a9-8a93-54a6f218da22-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.141550 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62f7h\" (UniqueName: \"kubernetes.io/projected/ebb2c83c-e257-49a9-8a93-54a6f218da22-kube-api-access-62f7h\") pod \"nmstate-console-plugin-6b874cbd85-9rgr6\" (UID: \"ebb2c83c-e257-49a9-8a93-54a6f218da22\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.195383 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b4898d7bb-hk6jf"] Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.196216 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.197958 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.214272 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b4898d7bb-hk6jf"] Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.214546 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.217836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzfb\" (UniqueName: \"kubernetes.io/projected/e8f89629-414c-41be-b865-3e20ba737642-kube-api-access-hqzfb\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.217891 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.217919 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-trusted-ca-bundle\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.217965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-oauth-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.217988 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-oauth-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.218024 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-console-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.218057 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-service-ca\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.223679 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cxwdw" Oct 14 07:00:18 crc kubenswrapper[5058]: W1014 07:00:18.251731 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9ace7e_d57a_4b8c_b9d4_a2cc1a71731b.slice/crio-639c80a728446c181769cf001de98669e3eae8f9244635344105b5326256a8de WatchSource:0}: Error finding container 639c80a728446c181769cf001de98669e3eae8f9244635344105b5326256a8de: Status 404 returned error can't find the container with id 639c80a728446c181769cf001de98669e3eae8f9244635344105b5326256a8de Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.309429 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319211 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-console-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319728 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-service-ca\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319770 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzfb\" (UniqueName: \"kubernetes.io/projected/e8f89629-414c-41be-b865-3e20ba737642-kube-api-access-hqzfb\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319825 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319853 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-trusted-ca-bundle\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319912 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-oauth-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.319935 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-oauth-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " 
pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.320346 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-console-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.320663 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-oauth-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.321135 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-service-ca\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.323474 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f89629-414c-41be-b865-3e20ba737642-trusted-ca-bundle\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.332671 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-oauth-config\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.347678 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzfb\" (UniqueName: \"kubernetes.io/projected/e8f89629-414c-41be-b865-3e20ba737642-kube-api-access-hqzfb\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.370603 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f89629-414c-41be-b865-3e20ba737642-console-serving-cert\") pod \"console-7b4898d7bb-hk6jf\" (UID: \"e8f89629-414c-41be-b865-3e20ba737642\") " pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.511390 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b4898d7bb-hk6jf" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.653499 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs"] Oct 14 07:00:18 crc kubenswrapper[5058]: W1014 07:00:18.661431 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b1b03ec_3403_4067_91e8_c1255a5e60d9.slice/crio-19fafaa62de5eba1720b3376d8a0b5fa04abb79b238fc48f00d673417c3e4380 WatchSource:0}: Error finding container 19fafaa62de5eba1720b3376d8a0b5fa04abb79b238fc48f00d673417c3e4380: Status 404 returned error can't find the container with id 19fafaa62de5eba1720b3376d8a0b5fa04abb79b238fc48f00d673417c3e4380 Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.710504 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp"] Oct 14 07:00:18 crc kubenswrapper[5058]: W1014 07:00:18.714143 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8db929e_48b8_4330_9dd8_25e8744d8c79.slice/crio-7dd70a4d4ed3e4977cffa2cc7b600d057846f8c8a47d4575e4e66e00f6ddac2e WatchSource:0}: Error finding container 7dd70a4d4ed3e4977cffa2cc7b600d057846f8c8a47d4575e4e66e00f6ddac2e: Status 404 returned error can't find the container with id 7dd70a4d4ed3e4977cffa2cc7b600d057846f8c8a47d4575e4e66e00f6ddac2e Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.806856 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6"] Oct 14 07:00:18 crc kubenswrapper[5058]: W1014 07:00:18.815745 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb2c83c_e257_49a9_8a93_54a6f218da22.slice/crio-dc209917063947c96821aa569129717446ec4a706b535f00dca8bd371e939098 WatchSource:0}: Error finding container dc209917063947c96821aa569129717446ec4a706b535f00dca8bd371e939098: Status 404 returned error can't find the container with id dc209917063947c96821aa569129717446ec4a706b535f00dca8bd371e939098 Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.820863 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.820892 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.867949 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcwj7" Oct 14 07:00:18 crc kubenswrapper[5058]: I1014 07:00:18.923672 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b4898d7bb-hk6jf"] Oct 14 07:00:18 crc kubenswrapper[5058]: W1014 07:00:18.930657 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8f89629_414c_41be_b865_3e20ba737642.slice/crio-7fbc9c0ae63b457227256c45404435aeb9dc26372a735e9062b38a9e376dbff6 WatchSource:0}: Error finding container 7fbc9c0ae63b457227256c45404435aeb9dc26372a735e9062b38a9e376dbff6: Status 404 returned error can't find the container with id 7fbc9c0ae63b457227256c45404435aeb9dc26372a735e9062b38a9e376dbff6 Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.043849 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b4898d7bb-hk6jf" event={"ID":"e8f89629-414c-41be-b865-3e20ba737642","Type":"ContainerStarted","Data":"7fbc9c0ae63b457227256c45404435aeb9dc26372a735e9062b38a9e376dbff6"}
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.044964 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" event={"ID":"ebb2c83c-e257-49a9-8a93-54a6f218da22","Type":"ContainerStarted","Data":"dc209917063947c96821aa569129717446ec4a706b535f00dca8bd371e939098"}
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.046388 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cxwdw" event={"ID":"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b","Type":"ContainerStarted","Data":"639c80a728446c181769cf001de98669e3eae8f9244635344105b5326256a8de"}
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.047176 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" event={"ID":"0b1b03ec-3403-4067-91e8-c1255a5e60d9","Type":"ContainerStarted","Data":"19fafaa62de5eba1720b3376d8a0b5fa04abb79b238fc48f00d673417c3e4380"}
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.048744 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" event={"ID":"a8db929e-48b8-4330-9dd8-25e8744d8c79","Type":"ContainerStarted","Data":"7dd70a4d4ed3e4977cffa2cc7b600d057846f8c8a47d4575e4e66e00f6ddac2e"}
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.104493 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcwj7"
Oct 14 07:00:19 crc kubenswrapper[5058]: I1014 07:00:19.698957 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"]
Oct 14 07:00:20 crc kubenswrapper[5058]: I1014 07:00:20.057686 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b4898d7bb-hk6jf" event={"ID":"e8f89629-414c-41be-b865-3e20ba737642","Type":"ContainerStarted","Data":"fde384e5eade1a0c6b622545812c543bb465ecb348be955d9f15bd9a1dec415c"}
Oct 14 07:00:21 crc kubenswrapper[5058]: I1014 07:00:21.064314 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcwj7" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="registry-server" containerID="cri-o://269be007a83b9f2d10191cf3d7d1aaa2f0757bb04ec083cec5f01f8e868cec01" gracePeriod=2
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.080171 5058 generic.go:334] "Generic (PLEG): container finished" podID="45af6609-b855-4d8a-9e99-38505fade914" containerID="269be007a83b9f2d10191cf3d7d1aaa2f0757bb04ec083cec5f01f8e868cec01" exitCode=0
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.080574 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerDied","Data":"269be007a83b9f2d10191cf3d7d1aaa2f0757bb04ec083cec5f01f8e868cec01"}
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.082307 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" event={"ID":"0b1b03ec-3403-4067-91e8-c1255a5e60d9","Type":"ContainerStarted","Data":"17a0904bb5ef2a8e3dcfca047d143857469dab0384fcd27a45e65e2132717083"}
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.313190 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcwj7"
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.334769 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b4898d7bb-hk6jf" podStartSLOduration=4.334745278 podStartE2EDuration="4.334745278s" podCreationTimestamp="2025-10-14 07:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:00:20.079035413 +0000 UTC m=+767.990119219" watchObservedRunningTime="2025-10-14 07:00:22.334745278 +0000 UTC m=+770.245829104"
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.476204 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities\") pod \"45af6609-b855-4d8a-9e99-38505fade914\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") "
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.476307 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbqhr\" (UniqueName: \"kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr\") pod \"45af6609-b855-4d8a-9e99-38505fade914\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") "
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.476356 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content\") pod \"45af6609-b855-4d8a-9e99-38505fade914\" (UID: \"45af6609-b855-4d8a-9e99-38505fade914\") "
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.480511 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities" (OuterVolumeSpecName: "utilities") pod "45af6609-b855-4d8a-9e99-38505fade914" (UID: "45af6609-b855-4d8a-9e99-38505fade914"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.483638 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr" (OuterVolumeSpecName: "kube-api-access-pbqhr") pod "45af6609-b855-4d8a-9e99-38505fade914" (UID: "45af6609-b855-4d8a-9e99-38505fade914"). InnerVolumeSpecName "kube-api-access-pbqhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.578461 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.578524 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbqhr\" (UniqueName: \"kubernetes.io/projected/45af6609-b855-4d8a-9e99-38505fade914-kube-api-access-pbqhr\") on node \"crc\" DevicePath \"\""
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.879323 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45af6609-b855-4d8a-9e99-38505fade914" (UID: "45af6609-b855-4d8a-9e99-38505fade914"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:00:22 crc kubenswrapper[5058]: I1014 07:00:22.883541 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45af6609-b855-4d8a-9e99-38505fade914-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.095067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcwj7" event={"ID":"45af6609-b855-4d8a-9e99-38505fade914","Type":"ContainerDied","Data":"9f68b60970ca28f546b58dce89565687902e111d50e0a9525918975a74a0e378"}
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.095107 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcwj7"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.095151 5058 scope.go:117] "RemoveContainer" containerID="269be007a83b9f2d10191cf3d7d1aaa2f0757bb04ec083cec5f01f8e868cec01"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.096783 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" event={"ID":"a8db929e-48b8-4330-9dd8-25e8744d8c79","Type":"ContainerStarted","Data":"1defdd3857675837c6c9a57510b6d3a28b43aba0b25cff8a8465b25a8c6031c8"}
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.097043 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.100011 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cxwdw" event={"ID":"5f9ace7e-d57a-4b8c-b9d4-a2cc1a71731b","Type":"ContainerStarted","Data":"4b8a416669ded250db38a56804e58415b8db7b1efa7b9dadaf714682df088656"}
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.100052 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cxwdw"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.108501 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" event={"ID":"ebb2c83c-e257-49a9-8a93-54a6f218da22","Type":"ContainerStarted","Data":"ebdcf1298b3afbee7a8cba893e260c30c7e60feb39d971f529c0d1832f76b845"}
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.114783 5058 scope.go:117] "RemoveContainer" containerID="456d9c12257963aca0374b9bbbbc2d9a385d4ef5dfa0dcd1271c264c0d413935"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.119366 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" podStartSLOduration=2.896054754 podStartE2EDuration="6.119342528s" podCreationTimestamp="2025-10-14 07:00:17 +0000 UTC" firstStartedPulling="2025-10-14 07:00:18.71620241 +0000 UTC m=+766.627286226" lastFinishedPulling="2025-10-14 07:00:21.939490154 +0000 UTC m=+769.850574000" observedRunningTime="2025-10-14 07:00:23.112583147 +0000 UTC m=+771.023666973" watchObservedRunningTime="2025-10-14 07:00:23.119342528 +0000 UTC m=+771.030426334"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.156804 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-9rgr6" podStartSLOduration=3.031819372 podStartE2EDuration="6.156774513s" podCreationTimestamp="2025-10-14 07:00:17 +0000 UTC" firstStartedPulling="2025-10-14 07:00:18.819153173 +0000 UTC m=+766.730236979" lastFinishedPulling="2025-10-14 07:00:21.944108284 +0000 UTC m=+769.855192120" observedRunningTime="2025-10-14 07:00:23.136029208 +0000 UTC m=+771.047113034" watchObservedRunningTime="2025-10-14 07:00:23.156774513 +0000 UTC m=+771.067858319"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.158448 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cxwdw" podStartSLOduration=2.553918618 podStartE2EDuration="6.15844224s" podCreationTimestamp="2025-10-14 07:00:17 +0000 UTC" firstStartedPulling="2025-10-14 07:00:18.254172824 +0000 UTC m=+766.165256630" lastFinishedPulling="2025-10-14 07:00:21.858696436 +0000 UTC m=+769.769780252" observedRunningTime="2025-10-14 07:00:23.158384339 +0000 UTC m=+771.069468155" watchObservedRunningTime="2025-10-14 07:00:23.15844224 +0000 UTC m=+771.069526046"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.192040 5058 scope.go:117] "RemoveContainer" containerID="0ea97533cb8ed8502463dc34fc8bb11e06834379c2ed89bd501f5b5122e7b921"
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.196349 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"]
Oct 14 07:00:23 crc kubenswrapper[5058]: I1014 07:00:23.209434 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcwj7"]
Oct 14 07:00:24 crc kubenswrapper[5058]: I1014 07:00:24.799817 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45af6609-b855-4d8a-9e99-38505fade914" path="/var/lib/kubelet/pods/45af6609-b855-4d8a-9e99-38505fade914/volumes"
Oct 14 07:00:25 crc kubenswrapper[5058]: I1014 07:00:25.139910 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" event={"ID":"0b1b03ec-3403-4067-91e8-c1255a5e60d9","Type":"ContainerStarted","Data":"c2f1601ca70520fa29ee28a0f9d4dc4c70e70ca65fe5a851c8bef7b1e8b7fc80"}
Oct 14 07:00:25 crc kubenswrapper[5058]: I1014 07:00:25.161027 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-rl5gs" podStartSLOduration=2.730380753 podStartE2EDuration="8.160992538s" podCreationTimestamp="2025-10-14 07:00:17 +0000 UTC" firstStartedPulling="2025-10-14 07:00:18.664010069 +0000 UTC m=+766.575093885" lastFinishedPulling="2025-10-14 07:00:24.094621864 +0000 UTC m=+772.005705670" observedRunningTime="2025-10-14 07:00:25.159729142 +0000 UTC m=+773.070812988" watchObservedRunningTime="2025-10-14 07:00:25.160992538 +0000 UTC m=+773.072076374"
Oct 14 07:00:28 crc kubenswrapper[5058]: I1014 07:00:28.257981 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cxwdw"
Oct 14 07:00:28 crc kubenswrapper[5058]: I1014 07:00:28.512177 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b4898d7bb-hk6jf"
Oct 14 07:00:28 crc kubenswrapper[5058]: I1014 07:00:28.512245 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b4898d7bb-hk6jf"
Oct 14 07:00:28 crc kubenswrapper[5058]: I1014 07:00:28.518926 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b4898d7bb-hk6jf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.016820 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"]
Oct 14 07:00:29 crc kubenswrapper[5058]: E1014 07:00:29.017956 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="extract-content"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.018198 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="extract-content"
Oct 14 07:00:29 crc kubenswrapper[5058]: E1014 07:00:29.018436 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="registry-server"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.018577 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="registry-server"
Oct 14 07:00:29 crc kubenswrapper[5058]: E1014 07:00:29.018683 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="extract-utilities"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.018788 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="extract-utilities"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.019127 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="45af6609-b855-4d8a-9e99-38505fade914" containerName="registry-server"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.029494 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.035748 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"]
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.170002 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b4898d7bb-hk6jf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.177357 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.177435 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.177505 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhks7\" (UniqueName: \"kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.228755 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fhbms"]
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.279126 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.279184 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.279224 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhks7\" (UniqueName: \"kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.281448 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf"
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.304453 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhks7\" (UniqueName: \"kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7\") pod \"certified-operators-8xdkf\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.365521 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:29 crc kubenswrapper[5058]: I1014 07:00:29.855655 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"] Oct 14 07:00:30 crc kubenswrapper[5058]: I1014 07:00:30.173654 5058 generic.go:334] "Generic (PLEG): container finished" podID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerID="922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f" exitCode=0 Oct 14 07:00:30 crc kubenswrapper[5058]: I1014 07:00:30.173771 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerDied","Data":"922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f"} Oct 14 07:00:30 crc kubenswrapper[5058]: I1014 07:00:30.173918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerStarted","Data":"cb97f7f93f3e7baf3cf4320bfe9a3b237896f54188cb4d8e56b5f1342db3260a"} Oct 14 07:00:31 crc kubenswrapper[5058]: I1014 07:00:31.187375 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerStarted","Data":"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638"} Oct 14 07:00:32 crc kubenswrapper[5058]: I1014 07:00:32.197773 5058 generic.go:334] "Generic (PLEG): container finished" podID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerID="a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638" exitCode=0 Oct 14 07:00:32 crc kubenswrapper[5058]: I1014 07:00:32.197871 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerDied","Data":"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638"} Oct 14 07:00:33 crc kubenswrapper[5058]: I1014 07:00:33.214277 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerStarted","Data":"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97"} Oct 14 07:00:33 crc kubenswrapper[5058]: I1014 07:00:33.245370 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xdkf" podStartSLOduration=2.713359475 podStartE2EDuration="5.245345529s" podCreationTimestamp="2025-10-14 07:00:28 +0000 UTC" firstStartedPulling="2025-10-14 07:00:30.175529992 +0000 UTC m=+778.086613798" 
lastFinishedPulling="2025-10-14 07:00:32.707516006 +0000 UTC m=+780.618599852" observedRunningTime="2025-10-14 07:00:33.238808795 +0000 UTC m=+781.149892611" watchObservedRunningTime="2025-10-14 07:00:33.245345529 +0000 UTC m=+781.156429365" Oct 14 07:00:38 crc kubenswrapper[5058]: I1014 07:00:38.222911 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pnfcp" Oct 14 07:00:39 crc kubenswrapper[5058]: I1014 07:00:39.365693 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:39 crc kubenswrapper[5058]: I1014 07:00:39.365760 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:39 crc kubenswrapper[5058]: I1014 07:00:39.440261 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:40 crc kubenswrapper[5058]: I1014 07:00:40.312633 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:40 crc kubenswrapper[5058]: I1014 07:00:40.366235 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"] Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.275174 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xdkf" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="registry-server" containerID="cri-o://50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97" gracePeriod=2 Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.770227 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.922747 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhks7\" (UniqueName: \"kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7\") pod \"5af8c1be-3e6b-4938-b636-b4b555cc853f\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.922863 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities\") pod \"5af8c1be-3e6b-4938-b636-b4b555cc853f\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.922897 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content\") pod \"5af8c1be-3e6b-4938-b636-b4b555cc853f\" (UID: \"5af8c1be-3e6b-4938-b636-b4b555cc853f\") " Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.924418 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities" (OuterVolumeSpecName: "utilities") pod "5af8c1be-3e6b-4938-b636-b4b555cc853f" (UID: "5af8c1be-3e6b-4938-b636-b4b555cc853f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:00:42 crc kubenswrapper[5058]: I1014 07:00:42.932096 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7" (OuterVolumeSpecName: "kube-api-access-hhks7") pod "5af8c1be-3e6b-4938-b636-b4b555cc853f" (UID: "5af8c1be-3e6b-4938-b636-b4b555cc853f"). InnerVolumeSpecName "kube-api-access-hhks7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.011700 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5af8c1be-3e6b-4938-b636-b4b555cc853f" (UID: "5af8c1be-3e6b-4938-b636-b4b555cc853f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.024193 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhks7\" (UniqueName: \"kubernetes.io/projected/5af8c1be-3e6b-4938-b636-b4b555cc853f-kube-api-access-hhks7\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.024554 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.024580 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af8c1be-3e6b-4938-b636-b4b555cc853f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.284316 5058 generic.go:334] "Generic (PLEG): container finished" podID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerID="50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97" exitCode=0 Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.284374 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerDied","Data":"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97"} Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.284412 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xdkf" event={"ID":"5af8c1be-3e6b-4938-b636-b4b555cc853f","Type":"ContainerDied","Data":"cb97f7f93f3e7baf3cf4320bfe9a3b237896f54188cb4d8e56b5f1342db3260a"} Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.284413 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xdkf" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.284440 5058 scope.go:117] "RemoveContainer" containerID="50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.312898 5058 scope.go:117] "RemoveContainer" containerID="a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.334906 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"] Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.337644 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xdkf"] Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.353014 5058 scope.go:117] "RemoveContainer" containerID="922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.371452 5058 scope.go:117] "RemoveContainer" containerID="50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97" Oct 14 07:00:43 crc kubenswrapper[5058]: E1014 07:00:43.372075 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97\": container with ID starting with 50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97 not found: ID does not exist" containerID="50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.372136 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97"} err="failed to get container status \"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97\": rpc error: code = NotFound desc = could not find container \"50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97\": container with ID starting with 50d6b4e791038b259e7c2591ca803dbb1157817dc312178868f85cc4effbdc97 not found: ID does not exist" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.372175 5058 scope.go:117] "RemoveContainer" containerID="a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638" Oct 14 07:00:43 crc kubenswrapper[5058]: E1014 07:00:43.372570 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638\": container with ID starting with a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638 not found: ID does not exist" containerID="a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.372713 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638"} err="failed to get container status \"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638\": rpc error: code = NotFound desc = could not find container \"a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638\": container with ID starting with a81f3e06d474dd124b0f89e1cba600e16c190f0f36d51a408fcf46136e0bd638 not found: ID does not exist" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.372914 5058 scope.go:117] "RemoveContainer" 
containerID="922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f" Oct 14 07:00:43 crc kubenswrapper[5058]: E1014 07:00:43.377279 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f\": container with ID starting with 922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f not found: ID does not exist" containerID="922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f" Oct 14 07:00:43 crc kubenswrapper[5058]: I1014 07:00:43.377326 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f"} err="failed to get container status \"922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f\": rpc error: code = NotFound desc = could not find container \"922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f\": container with ID starting with 922ffe5e8af4b1a44189ec15ee1fe4ab2f99f450bcdfb73c1430b422adeecd2f not found: ID does not exist" Oct 14 07:00:44 crc kubenswrapper[5058]: I1014 07:00:44.801200 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" path="/var/lib/kubelet/pods/5af8c1be-3e6b-4938-b636-b4b555cc853f/volumes" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.295542 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fhbms" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" containerID="cri-o://b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5" gracePeriod=15 Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.612476 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k"] Oct 14 07:00:54 crc kubenswrapper[5058]: E1014 07:00:54.612898 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="extract-content" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.612908 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="extract-content" Oct 14 07:00:54 crc kubenswrapper[5058]: E1014 07:00:54.612921 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="registry-server" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.612926 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="registry-server" Oct 14 07:00:54 crc kubenswrapper[5058]: E1014 07:00:54.612937 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="extract-utilities" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.612944 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="extract-utilities" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.613026 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af8c1be-3e6b-4938-b636-b4b555cc853f" containerName="registry-server" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.615225 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.620570 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k"] Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.621522 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.643742 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.643860 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.643909 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k6h\" (UniqueName: \"kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.706808 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fhbms_f05623c8-3bf6-44d3-a522-cb1175227bf4/console/0.log" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.706867 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.744907 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.744983 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.745033 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.745115 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746019 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746067 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746077 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca" (OuterVolumeSpecName: "service-ca") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746119 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746160 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvp67\" (UniqueName: \"kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67\") pod \"f05623c8-3bf6-44d3-a522-cb1175227bf4\" (UID: \"f05623c8-3bf6-44d3-a522-cb1175227bf4\") " Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746163 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746198 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config" (OuterVolumeSpecName: "console-config") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746390 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746614 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746703 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k6h\" (UniqueName: \"kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746788 5058 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746837 5058 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746852 5058 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.746867 5058 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f05623c8-3bf6-44d3-a522-cb1175227bf4-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.747239 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.747350 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.755995 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.758578 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67" (OuterVolumeSpecName: "kube-api-access-nvp67") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "kube-api-access-nvp67". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.758834 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f05623c8-3bf6-44d3-a522-cb1175227bf4" (UID: "f05623c8-3bf6-44d3-a522-cb1175227bf4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.765673 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k6h\" (UniqueName: \"kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.848081 5058 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.848211 5058 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f05623c8-3bf6-44d3-a522-cb1175227bf4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.848268 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvp67\" (UniqueName: \"kubernetes.io/projected/f05623c8-3bf6-44d3-a522-cb1175227bf4-kube-api-access-nvp67\") on node \"crc\" DevicePath \"\"" Oct 14 07:00:54 crc kubenswrapper[5058]: I1014 07:00:54.936157 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384335 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fhbms_f05623c8-3bf6-44d3-a522-cb1175227bf4/console/0.log" Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384430 5058 generic.go:334] "Generic (PLEG): container finished" podID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerID="b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5" exitCode=2 Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384486 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fhbms" event={"ID":"f05623c8-3bf6-44d3-a522-cb1175227bf4","Type":"ContainerDied","Data":"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5"} Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384545 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fhbms" Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384569 5058 scope.go:117] "RemoveContainer" containerID="b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5" Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.384548 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fhbms" event={"ID":"f05623c8-3bf6-44d3-a522-cb1175227bf4","Type":"ContainerDied","Data":"7db196072167af6f2bd8f8769180da8684c23881b9483070028b91e0a2691978"} Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.416143 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k"] Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.419067 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fhbms"] Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.429601 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fhbms"] Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.436087 5058 scope.go:117] "RemoveContainer" containerID="b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5" Oct 14 07:00:55 crc kubenswrapper[5058]: E1014 07:00:55.436993 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5\": container with ID starting with b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5 not found: ID does not exist" containerID="b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5" Oct 14 07:00:55 crc kubenswrapper[5058]: I1014 07:00:55.437081 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5"} err="failed to get container status \"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5\": rpc error: code = NotFound desc = could not find container \"b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5\": container with ID starting with b966ab49d6b96f0b885a57a4184a5de3a0d2bd61237d670f55b15baa70ae49c5 not found: ID does not exist" Oct 14 07:00:56 crc kubenswrapper[5058]: I1014 07:00:56.391107 5058 generic.go:334] "Generic (PLEG): container finished" podID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerID="87f4654ac5308d090c0220239d46e0eef1a0b5207bcffaf6756b38cca912f6af" exitCode=0 Oct 14 07:00:56 crc kubenswrapper[5058]: I1014 07:00:56.392294 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" event={"ID":"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e","Type":"ContainerDied","Data":"87f4654ac5308d090c0220239d46e0eef1a0b5207bcffaf6756b38cca912f6af"} Oct 14 07:00:56 crc kubenswrapper[5058]: I1014 07:00:56.393375 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" event={"ID":"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e","Type":"ContainerStarted","Data":"26398bb161dae4783f53c80bfc92711b8773776bc5e57525db13f1c7ef682bc3"} Oct 14 07:00:56 crc kubenswrapper[5058]: I1014 07:00:56.804736 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" 
path="/var/lib/kubelet/pods/f05623c8-3bf6-44d3-a522-cb1175227bf4/volumes" Oct 14 07:00:58 crc kubenswrapper[5058]: I1014 07:00:58.409819 5058 generic.go:334] "Generic (PLEG): container finished" podID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerID="f93a2c5f3fbac3679dc1d0b993bcc9a2c5b3525a004b7c7ba47ced136ca7ef3a" exitCode=0 Oct 14 07:00:58 crc kubenswrapper[5058]: I1014 07:00:58.409892 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" event={"ID":"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e","Type":"ContainerDied","Data":"f93a2c5f3fbac3679dc1d0b993bcc9a2c5b3525a004b7c7ba47ced136ca7ef3a"} Oct 14 07:00:59 crc kubenswrapper[5058]: I1014 07:00:59.421887 5058 generic.go:334] "Generic (PLEG): container finished" podID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerID="afe5c809d5d891ddcaafecc128ef8b2a91b4a2161b7707a539eb4ab418334d17" exitCode=0 Oct 14 07:00:59 crc kubenswrapper[5058]: I1014 07:00:59.421942 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" event={"ID":"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e","Type":"ContainerDied","Data":"afe5c809d5d891ddcaafecc128ef8b2a91b4a2161b7707a539eb4ab418334d17"} Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.788575 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.835216 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util\") pod \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.835323 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78k6h\" (UniqueName: \"kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h\") pod \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.835391 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle\") pod \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\" (UID: \"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e\") " Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.836753 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle" (OuterVolumeSpecName: "bundle") pod "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" (UID: "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.844149 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h" (OuterVolumeSpecName: "kube-api-access-78k6h") pod "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" (UID: "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e"). InnerVolumeSpecName "kube-api-access-78k6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.859404 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util" (OuterVolumeSpecName: "util") pod "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" (UID: "ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.935927 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78k6h\" (UniqueName: \"kubernetes.io/projected/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-kube-api-access-78k6h\") on node \"crc\" DevicePath \"\"" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.935960 5058 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:01:00 crc kubenswrapper[5058]: I1014 07:01:00.935971 5058 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e-util\") on node \"crc\" DevicePath \"\"" Oct 14 07:01:01 crc kubenswrapper[5058]: I1014 07:01:01.438482 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" event={"ID":"ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e","Type":"ContainerDied","Data":"26398bb161dae4783f53c80bfc92711b8773776bc5e57525db13f1c7ef682bc3"} Oct 14 07:01:01 crc kubenswrapper[5058]: I1014 07:01:01.438526 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26398bb161dae4783f53c80bfc92711b8773776bc5e57525db13f1c7ef682bc3" Oct 14 07:01:01 crc kubenswrapper[5058]: I1014 07:01:01.438596 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.430580 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn"] Oct 14 07:01:11 crc kubenswrapper[5058]: E1014 07:01:11.431230 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431241 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" Oct 14 07:01:11 crc kubenswrapper[5058]: E1014 07:01:11.431256 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="pull" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431262 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="pull" Oct 14 07:01:11 crc kubenswrapper[5058]: E1014 07:01:11.431274 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="extract" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431280 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="extract" Oct 14 07:01:11 crc kubenswrapper[5058]: E1014 07:01:11.431295 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="util" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431300 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="util" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431382 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05623c8-3bf6-44d3-a522-cb1175227bf4" containerName="console" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431398 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e" containerName="extract" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.431766 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.436381 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.436849 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wwq9r" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.437032 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.437500 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.437635 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.445287 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn"] Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.569613 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn55g\" (UniqueName: \"kubernetes.io/projected/4ae5e206-2357-4e79-a812-9a6a6b3be856-kube-api-access-fn55g\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.569675 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-apiservice-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.569845 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-webhook-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.670358 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn55g\" (UniqueName: \"kubernetes.io/projected/4ae5e206-2357-4e79-a812-9a6a6b3be856-kube-api-access-fn55g\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.670403 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-apiservice-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.670449 
5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-webhook-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.680480 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-apiservice-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.683333 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ae5e206-2357-4e79-a812-9a6a6b3be856-webhook-cert\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.693949 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn55g\" (UniqueName: \"kubernetes.io/projected/4ae5e206-2357-4e79-a812-9a6a6b3be856-kube-api-access-fn55g\") pod \"metallb-operator-controller-manager-59bb547746-xxnjn\" (UID: \"4ae5e206-2357-4e79-a812-9a6a6b3be856\") " pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.748212 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.773349 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85"] Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.774672 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.777203 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.777497 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kfqkt" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.777728 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.803359 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85"] Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.973288 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mprb\" (UniqueName: \"kubernetes.io/projected/43ee0dfb-b680-485a-b3de-6102c41429f6-kube-api-access-9mprb\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.973362 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-apiservice-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:11 crc kubenswrapper[5058]: I1014 07:01:11.973403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-webhook-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.004484 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn"] Oct 14 07:01:12 crc kubenswrapper[5058]: W1014 07:01:12.014035 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae5e206_2357_4e79_a812_9a6a6b3be856.slice/crio-8cacebf924906132318096701a6cb9675d5c664b5248094ad08449b99298c6fc WatchSource:0}: Error finding container 8cacebf924906132318096701a6cb9675d5c664b5248094ad08449b99298c6fc: Status 404 returned error can't find the container with id 8cacebf924906132318096701a6cb9675d5c664b5248094ad08449b99298c6fc Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.074378 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-apiservice-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.074450 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-webhook-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.074501 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mprb\" (UniqueName: \"kubernetes.io/projected/43ee0dfb-b680-485a-b3de-6102c41429f6-kube-api-access-9mprb\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.078760 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-apiservice-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.082437 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43ee0dfb-b680-485a-b3de-6102c41429f6-webhook-cert\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.092938 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mprb\" (UniqueName: \"kubernetes.io/projected/43ee0dfb-b680-485a-b3de-6102c41429f6-kube-api-access-9mprb\") pod \"metallb-operator-webhook-server-bf9c89b58-t7r85\" (UID: \"43ee0dfb-b680-485a-b3de-6102c41429f6\") " pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.130269 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.504458 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" event={"ID":"4ae5e206-2357-4e79-a812-9a6a6b3be856","Type":"ContainerStarted","Data":"8cacebf924906132318096701a6cb9675d5c664b5248094ad08449b99298c6fc"} Oct 14 07:01:12 crc kubenswrapper[5058]: I1014 07:01:12.640124 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85"] Oct 14 07:01:13 crc kubenswrapper[5058]: I1014 07:01:13.510704 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" event={"ID":"43ee0dfb-b680-485a-b3de-6102c41429f6","Type":"ContainerStarted","Data":"9b8bea3a4addbb0fa59726e3d1e9f0030c70b276e47a48dacea9dbfcb432eb8c"} Oct 14 07:01:15 crc kubenswrapper[5058]: I1014 07:01:15.525472 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" event={"ID":"4ae5e206-2357-4e79-a812-9a6a6b3be856","Type":"ContainerStarted","Data":"ca59e58b4569c6ac4ceaf5f30c7a42d6718c53db261e514a3d9f16c95c8710d7"} Oct 14 07:01:15 crc kubenswrapper[5058]: I1014 07:01:15.527594 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:15 crc kubenswrapper[5058]: I1014 07:01:15.549975 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" podStartSLOduration=1.5176215800000001 podStartE2EDuration="4.54995357s" podCreationTimestamp="2025-10-14 07:01:11 +0000 UTC" firstStartedPulling="2025-10-14 07:01:12.016782519 +0000 UTC m=+819.927866325" lastFinishedPulling="2025-10-14 07:01:15.049114509 +0000 UTC m=+822.960198315" observedRunningTime="2025-10-14 07:01:15.548704134 +0000 UTC m=+823.459787980" watchObservedRunningTime="2025-10-14 07:01:15.54995357 +0000 UTC m=+823.461037386" Oct 14 07:01:17 crc kubenswrapper[5058]: I1014 07:01:17.539238 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" event={"ID":"43ee0dfb-b680-485a-b3de-6102c41429f6","Type":"ContainerStarted","Data":"508e870309640a9cd67a0fa788d50f95044dc018cf937dac41d865e3175d29f9"} Oct 14 07:01:17 crc kubenswrapper[5058]: I1014 07:01:17.566266 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" podStartSLOduration=2.081412554 podStartE2EDuration="6.566243815s" podCreationTimestamp="2025-10-14 07:01:11 +0000 UTC" firstStartedPulling="2025-10-14 07:01:12.649269691 +0000 UTC m=+820.560353497" lastFinishedPulling="2025-10-14 07:01:17.134100952 +0000 UTC m=+825.045184758" observedRunningTime="2025-10-14 07:01:17.561538733 +0000 UTC m=+825.472622579" watchObservedRunningTime="2025-10-14 07:01:17.566243815 +0000 UTC m=+825.477327651" Oct 14 07:01:18 crc kubenswrapper[5058]: I1014 07:01:18.547770 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:32 crc kubenswrapper[5058]: I1014 07:01:32.142241 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-bf9c89b58-t7r85" Oct 14 07:01:51 crc kubenswrapper[5058]: I1014 07:01:51.751049 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59bb547746-xxnjn" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.507492 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t4dz4"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.509922 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.512140 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.512157 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.515118 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b2wkf" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.526129 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.531697 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.548168 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.560557 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.610472 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-79nbt"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.611274 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.613647 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.613776 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.614141 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-prp96" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.614879 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.624439 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-5l2rj"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.625454 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.628415 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.641245 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-5l2rj"] Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657658 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xffx\" (UniqueName: \"kubernetes.io/projected/dbd04243-c4a2-469c-8479-3de6c521fa51-kube-api-access-4xffx\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657699 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/457eef4c-39c2-49a3-a77d-b41938e270de-frr-startup\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657749 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-conf\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657765 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/457eef4c-39c2-49a3-a77d-b41938e270de-metrics-certs\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657785 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-cert\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657817 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-sockets\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metallb-excludel2\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 
07:01:52.657854 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzhc\" (UniqueName: \"kubernetes.io/projected/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-kube-api-access-wxzhc\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657876 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-metrics\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.657889 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.659253 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-reloader\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.659352 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-metrics-certs\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.659394 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474bm\" (UniqueName: \"kubernetes.io/projected/f5b555e5-444c-4c58-976e-0c041f7e5bba-kube-api-access-474bm\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.659471 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmvk\" (UniqueName: \"kubernetes.io/projected/457eef4c-39c2-49a3-a77d-b41938e270de-kube-api-access-nmmvk\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.659496 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760467 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmvk\" (UniqueName: \"kubernetes.io/projected/457eef4c-39c2-49a3-a77d-b41938e270de-kube-api-access-nmmvk\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 
07:01:52.760515 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760540 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xffx\" (UniqueName: \"kubernetes.io/projected/dbd04243-c4a2-469c-8479-3de6c521fa51-kube-api-access-4xffx\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760594 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760623 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/457eef4c-39c2-49a3-a77d-b41938e270de-frr-startup\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760675 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-conf\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760700 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/457eef4c-39c2-49a3-a77d-b41938e270de-metrics-certs\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760767 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-cert\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760789 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-sockets\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760852 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metallb-excludel2\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzhc\" (UniqueName: \"kubernetes.io/projected/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-kube-api-access-wxzhc\") pod 
\"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760932 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-metrics\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.760952 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.761018 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-reloader\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.761084 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-metrics-certs\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.761114 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-474bm\" (UniqueName: \"kubernetes.io/projected/f5b555e5-444c-4c58-976e-0c041f7e5bba-kube-api-access-474bm\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.762141 5058 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.762223 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert podName:f5b555e5-444c-4c58-976e-0c041f7e5bba nodeName:}" failed. No retries permitted until 2025-10-14 07:01:53.262206303 +0000 UTC m=+861.173290129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert") pod "frr-k8s-webhook-server-64bf5d555-ggpvr" (UID: "f5b555e5-444c-4c58-976e-0c041f7e5bba") : secret "frr-k8s-webhook-server-cert" not found Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.763699 5058 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.763840 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs podName:dbf554ea-ea3c-4407-8f15-6a8c564d81e7 nodeName:}" failed. No retries permitted until 2025-10-14 07:01:53.263827409 +0000 UTC m=+861.174911225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs") pod "speaker-79nbt" (UID: "dbf554ea-ea3c-4407-8f15-6a8c564d81e7") : secret "speaker-certs-secret" not found Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.766439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/457eef4c-39c2-49a3-a77d-b41938e270de-frr-startup\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.766730 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-conf\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.768088 5058 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.768143 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-frr-sockets\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.768160 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-reloader\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: E1014 07:01:52.768180 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist podName:dbf554ea-ea3c-4407-8f15-6a8c564d81e7 nodeName:}" failed. No retries permitted until 2025-10-14 07:01:53.268156112 +0000 UTC m=+861.179239958 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist") pod "speaker-79nbt" (UID: "dbf554ea-ea3c-4407-8f15-6a8c564d81e7") : secret "metallb-memberlist" not found Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.768669 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metallb-excludel2\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.768905 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/457eef4c-39c2-49a3-a77d-b41938e270de-metrics\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.772007 5058 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.776731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/457eef4c-39c2-49a3-a77d-b41938e270de-metrics-certs\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.777245 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-metrics-certs\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.784180 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbd04243-c4a2-469c-8479-3de6c521fa51-cert\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.787715 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzhc\" (UniqueName: \"kubernetes.io/projected/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-kube-api-access-wxzhc\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.792469 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xffx\" (UniqueName: \"kubernetes.io/projected/dbd04243-c4a2-469c-8479-3de6c521fa51-kube-api-access-4xffx\") pod \"controller-68d546b9d8-5l2rj\" (UID: \"dbd04243-c4a2-469c-8479-3de6c521fa51\") " pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.794871 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmvk\" (UniqueName: \"kubernetes.io/projected/457eef4c-39c2-49a3-a77d-b41938e270de-kube-api-access-nmmvk\") pod \"frr-k8s-t4dz4\" (UID: \"457eef4c-39c2-49a3-a77d-b41938e270de\") " pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.799558 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-474bm\" (UniqueName: 
\"kubernetes.io/projected/f5b555e5-444c-4c58-976e-0c041f7e5bba-kube-api-access-474bm\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.824528 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:01:52 crc kubenswrapper[5058]: I1014 07:01:52.938257 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.134165 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-5l2rj"] Oct 14 07:01:53 crc kubenswrapper[5058]: W1014 07:01:53.142974 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbd04243_c4a2_469c_8479_3de6c521fa51.slice/crio-e7b30398c7042b99d20c2919dd40198c44a24eb57c309c1cbb68fa072a4302b1 WatchSource:0}: Error finding container e7b30398c7042b99d20c2919dd40198c44a24eb57c309c1cbb68fa072a4302b1: Status 404 returned error can't find the container with id e7b30398c7042b99d20c2919dd40198c44a24eb57c309c1cbb68fa072a4302b1 Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.267871 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.267912 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.272841 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5b555e5-444c-4c58-976e-0c041f7e5bba-cert\") pod \"frr-k8s-webhook-server-64bf5d555-ggpvr\" (UID: \"f5b555e5-444c-4c58-976e-0c041f7e5bba\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.273302 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-metrics-certs\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.369130 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:53 crc kubenswrapper[5058]: E1014 07:01:53.369311 5058 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 07:01:53 crc kubenswrapper[5058]: E1014 07:01:53.369692 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist 
podName:dbf554ea-ea3c-4407-8f15-6a8c564d81e7 nodeName:}" failed. No retries permitted until 2025-10-14 07:01:54.36967035 +0000 UTC m=+862.280754156 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist") pod "speaker-79nbt" (UID: "dbf554ea-ea3c-4407-8f15-6a8c564d81e7") : secret "metallb-memberlist" not found Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.447107 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.703901 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr"] Oct 14 07:01:53 crc kubenswrapper[5058]: W1014 07:01:53.712255 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b555e5_444c_4c58_976e_0c041f7e5bba.slice/crio-45e12b25e64dd5c5512034055dcc2b089ddfe4a622408cdbd30459475e8acedf WatchSource:0}: Error finding container 45e12b25e64dd5c5512034055dcc2b089ddfe4a622408cdbd30459475e8acedf: Status 404 returned error can't find the container with id 45e12b25e64dd5c5512034055dcc2b089ddfe4a622408cdbd30459475e8acedf Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.758788 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" event={"ID":"f5b555e5-444c-4c58-976e-0c041f7e5bba","Type":"ContainerStarted","Data":"45e12b25e64dd5c5512034055dcc2b089ddfe4a622408cdbd30459475e8acedf"} Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.760043 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-5l2rj" event={"ID":"dbd04243-c4a2-469c-8479-3de6c521fa51","Type":"ContainerStarted","Data":"ee058a23c26c3c73b11e207a08d5edae044b3c0eb5dcc010ee682e1f83b97903"} Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.760070 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-5l2rj" event={"ID":"dbd04243-c4a2-469c-8479-3de6c521fa51","Type":"ContainerStarted","Data":"a061f57de7a890b15f12ee814324c0bc0290a24d762fc3e868e20f96efffd1d8"} Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.760080 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-5l2rj" event={"ID":"dbd04243-c4a2-469c-8479-3de6c521fa51","Type":"ContainerStarted","Data":"e7b30398c7042b99d20c2919dd40198c44a24eb57c309c1cbb68fa072a4302b1"} Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.760279 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.762063 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"f4f04bf82c64e741f2e8a316cb9396ff702938d0a928ca3b240dadc19f8b1f08"} Oct 14 07:01:53 crc kubenswrapper[5058]: I1014 07:01:53.776083 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-5l2rj" podStartSLOduration=1.776065495 podStartE2EDuration="1.776065495s" podCreationTimestamp="2025-10-14 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 
07:01:53.774447018 +0000 UTC m=+861.685530844" watchObservedRunningTime="2025-10-14 07:01:53.776065495 +0000 UTC m=+861.687149311" Oct 14 07:01:54 crc kubenswrapper[5058]: I1014 07:01:54.389334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:54 crc kubenswrapper[5058]: I1014 07:01:54.398664 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbf554ea-ea3c-4407-8f15-6a8c564d81e7-memberlist\") pod \"speaker-79nbt\" (UID: \"dbf554ea-ea3c-4407-8f15-6a8c564d81e7\") " pod="metallb-system/speaker-79nbt" Oct 14 07:01:54 crc kubenswrapper[5058]: I1014 07:01:54.429935 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-79nbt" Oct 14 07:01:54 crc kubenswrapper[5058]: W1014 07:01:54.459776 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf554ea_ea3c_4407_8f15_6a8c564d81e7.slice/crio-6f432b9f780f8a38dc77477aa7167f966fab09c43cfd4a80a0eb8d29f9b64277 WatchSource:0}: Error finding container 6f432b9f780f8a38dc77477aa7167f966fab09c43cfd4a80a0eb8d29f9b64277: Status 404 returned error can't find the container with id 6f432b9f780f8a38dc77477aa7167f966fab09c43cfd4a80a0eb8d29f9b64277 Oct 14 07:01:54 crc kubenswrapper[5058]: I1014 07:01:54.768984 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-79nbt" event={"ID":"dbf554ea-ea3c-4407-8f15-6a8c564d81e7","Type":"ContainerStarted","Data":"b2b2b9282ac12104944117b697e1af280f14a11a3d58bbdfa27430cab6a919f9"} Oct 14 07:01:54 crc kubenswrapper[5058]: I1014 07:01:54.770191 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-79nbt" event={"ID":"dbf554ea-ea3c-4407-8f15-6a8c564d81e7","Type":"ContainerStarted","Data":"6f432b9f780f8a38dc77477aa7167f966fab09c43cfd4a80a0eb8d29f9b64277"} Oct 14 07:01:55 crc kubenswrapper[5058]: I1014 07:01:55.779246 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-79nbt" event={"ID":"dbf554ea-ea3c-4407-8f15-6a8c564d81e7","Type":"ContainerStarted","Data":"2c85669bdaaf41524678db0444f703442eba52ad2aa0ed705448144d1b57e667"} Oct 14 07:01:55 crc kubenswrapper[5058]: I1014 07:01:55.780276 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-79nbt" Oct 14 07:01:55 crc kubenswrapper[5058]: I1014 07:01:55.811344 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-79nbt" podStartSLOduration=3.811329332 podStartE2EDuration="3.811329332s" podCreationTimestamp="2025-10-14 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:01:55.810277152 +0000 UTC m=+863.721360968" watchObservedRunningTime="2025-10-14 07:01:55.811329332 +0000 UTC m=+863.722413138" Oct 14 07:02:00 crc kubenswrapper[5058]: I1014 07:02:00.817615 5058 generic.go:334] "Generic (PLEG): container finished" podID="457eef4c-39c2-49a3-a77d-b41938e270de" containerID="f1efa190513184ccf4304dabe4a00e74c5d77c297c4d123f5bbe2bd682298255" exitCode=0 Oct 14 07:02:00 crc kubenswrapper[5058]: I1014 07:02:00.817875 5058 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerDied","Data":"f1efa190513184ccf4304dabe4a00e74c5d77c297c4d123f5bbe2bd682298255"} Oct 14 07:02:00 crc kubenswrapper[5058]: I1014 07:02:00.821434 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" event={"ID":"f5b555e5-444c-4c58-976e-0c041f7e5bba","Type":"ContainerStarted","Data":"e8925a8989c1658eb95576c6d25ac0ea3cf7727edf0e688e0d80deea1c9897fb"} Oct 14 07:02:00 crc kubenswrapper[5058]: I1014 07:02:00.821764 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:02:00 crc kubenswrapper[5058]: I1014 07:02:00.893184 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" podStartSLOduration=2.1625027 podStartE2EDuration="8.893163166s" podCreationTimestamp="2025-10-14 07:01:52 +0000 UTC" firstStartedPulling="2025-10-14 07:01:53.714487669 +0000 UTC m=+861.625571485" lastFinishedPulling="2025-10-14 07:02:00.445148135 +0000 UTC m=+868.356231951" observedRunningTime="2025-10-14 07:02:00.887101803 +0000 UTC m=+868.798185639" watchObservedRunningTime="2025-10-14 07:02:00.893163166 +0000 UTC m=+868.804246982" Oct 14 07:02:01 crc kubenswrapper[5058]: I1014 07:02:01.829594 5058 generic.go:334] "Generic (PLEG): container finished" podID="457eef4c-39c2-49a3-a77d-b41938e270de" containerID="3b04ca0a23a7bd25f599ed67e6d3e98a2e4b2bd882578f07b937371a17db3adb" exitCode=0 Oct 14 07:02:01 crc kubenswrapper[5058]: I1014 07:02:01.829656 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerDied","Data":"3b04ca0a23a7bd25f599ed67e6d3e98a2e4b2bd882578f07b937371a17db3adb"} Oct 14 07:02:02 crc kubenswrapper[5058]: I1014 07:02:02.839656 5058 generic.go:334] "Generic (PLEG): container finished" podID="457eef4c-39c2-49a3-a77d-b41938e270de" containerID="f787f12ea49e32d79d58e94996073b72b5537618327b6abf36a9d587604fa934" exitCode=0 Oct 14 07:02:02 crc kubenswrapper[5058]: I1014 07:02:02.839721 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerDied","Data":"f787f12ea49e32d79d58e94996073b72b5537618327b6abf36a9d587604fa934"} Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.655922 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.656235 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.854179 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"15017fa6248642bd12fb1a1445737708b8a325a1c5f1dfdc1df1e2a27996e48c"} Oct 14 07:02:03 crc kubenswrapper[5058]: 
I1014 07:02:03.854223 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"7b028576ea20cd8912b9e2eaff1798a04aa1bbba374e2d6f89c815db209734f2"} Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.854233 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"3d9b60bf8dc000384ee4aeec9613265b927f5f44ca235101d027745b1491c937"} Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.854243 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"4e9db6134a97d03228b83975f56dbe51ac6ceaaf56386d798967493f693571c4"} Oct 14 07:02:03 crc kubenswrapper[5058]: I1014 07:02:03.854252 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"aea62a6a20814648ce1b4d67f4e78986331547b27e4033761e2729949046556e"} Oct 14 07:02:04 crc kubenswrapper[5058]: I1014 07:02:04.435242 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-79nbt" Oct 14 07:02:04 crc kubenswrapper[5058]: I1014 07:02:04.867957 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t4dz4" event={"ID":"457eef4c-39c2-49a3-a77d-b41938e270de","Type":"ContainerStarted","Data":"de2740de5f4c9aa3dd47e578bda8b0f7dfdced95daa31586e063db534a42e2bc"} Oct 14 07:02:04 crc kubenswrapper[5058]: I1014 07:02:04.868274 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:02:04 crc kubenswrapper[5058]: I1014 07:02:04.902729 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t4dz4" podStartSLOduration=5.455533592 podStartE2EDuration="12.902703313s" podCreationTimestamp="2025-10-14 07:01:52 +0000 UTC" firstStartedPulling="2025-10-14 07:01:52.970423428 +0000 UTC m=+860.881507234" lastFinishedPulling="2025-10-14 07:02:00.417593139 +0000 UTC m=+868.328676955" observedRunningTime="2025-10-14 07:02:04.895497888 +0000 UTC m=+872.806581714" watchObservedRunningTime="2025-10-14 07:02:04.902703313 +0000 UTC m=+872.813787159" Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.885514 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85"] Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.886986 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.895610 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.899216 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85"] Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.972733 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4pq\" (UniqueName: \"kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.972834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:05 crc kubenswrapper[5058]: I1014 07:02:05.972857 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.074312 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4pq\" (UniqueName: \"kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.074416 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.074450 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.075216 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.075255 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.099833 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4pq\" (UniqueName: \"kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.211127 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.581137 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85"] Oct 14 07:02:06 crc kubenswrapper[5058]: W1014 07:02:06.594000 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419dbddd_983d_4eee_8e5a_9ad9ee3de5b2.slice/crio-61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700 WatchSource:0}: Error finding container 61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700: Status 404 returned error can't find the container with id 61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700 Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.883560 5058 generic.go:334] "Generic (PLEG): container finished" podID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerID="2bb3180f3e2f8994dfae9d964529131716c0895ba214f7106af15539cf6ae9fa" exitCode=0 Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.883603 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" event={"ID":"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2","Type":"ContainerDied","Data":"2bb3180f3e2f8994dfae9d964529131716c0895ba214f7106af15539cf6ae9fa"} Oct 14 07:02:06 crc kubenswrapper[5058]: I1014 07:02:06.883628 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" event={"ID":"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2","Type":"ContainerStarted","Data":"61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700"} Oct 14 07:02:07 crc kubenswrapper[5058]: I1014 07:02:07.825184 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:02:07 crc kubenswrapper[5058]: I1014 07:02:07.888682 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:02:10 crc kubenswrapper[5058]: 
I1014 07:02:10.920200 5058 generic.go:334] "Generic (PLEG): container finished" podID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerID="abbfd536fc0cef243712c5bcb252958a3710a435bfa5891f12492c79b3eb8e02" exitCode=0 Oct 14 07:02:10 crc kubenswrapper[5058]: I1014 07:02:10.920345 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" event={"ID":"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2","Type":"ContainerDied","Data":"abbfd536fc0cef243712c5bcb252958a3710a435bfa5891f12492c79b3eb8e02"} Oct 14 07:02:11 crc kubenswrapper[5058]: I1014 07:02:11.934447 5058 generic.go:334] "Generic (PLEG): container finished" podID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerID="22dcb711f158d4b633dd1dc3407e93382b6ad9da20096e821cc5b0c76dcadd23" exitCode=0 Oct 14 07:02:11 crc kubenswrapper[5058]: I1014 07:02:11.934507 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" event={"ID":"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2","Type":"ContainerDied","Data":"22dcb711f158d4b633dd1dc3407e93382b6ad9da20096e821cc5b0c76dcadd23"} Oct 14 07:02:12 crc kubenswrapper[5058]: I1014 07:02:12.827490 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t4dz4" Oct 14 07:02:12 crc kubenswrapper[5058]: I1014 07:02:12.941133 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-5l2rj" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.256175 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.369822 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq4pq\" (UniqueName: \"kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq\") pod \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.369866 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle\") pod \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.370036 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util\") pod \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\" (UID: \"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2\") " Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.372181 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle" (OuterVolumeSpecName: "bundle") pod "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" (UID: "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.381256 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq" (OuterVolumeSpecName: "kube-api-access-rq4pq") pod "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" (UID: "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2"). InnerVolumeSpecName "kube-api-access-rq4pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.384960 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util" (OuterVolumeSpecName: "util") pod "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" (UID: "419dbddd-983d-4eee-8e5a-9ad9ee3de5b2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.455371 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-ggpvr" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.471604 5058 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-util\") on node \"crc\" DevicePath \"\"" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.471636 5058 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.471740 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq4pq\" (UniqueName: \"kubernetes.io/projected/419dbddd-983d-4eee-8e5a-9ad9ee3de5b2-kube-api-access-rq4pq\") on node \"crc\" DevicePath \"\"" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.956087 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" event={"ID":"419dbddd-983d-4eee-8e5a-9ad9ee3de5b2","Type":"ContainerDied","Data":"61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700"} Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.956485 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61fa41103cae06191c9cbe386a9781a25d91a2669ff3b3725f9566b76b79b700" Oct 14 07:02:13 crc kubenswrapper[5058]: I1014 07:02:13.956190 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.968607 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx"] Oct 14 07:02:18 crc kubenswrapper[5058]: E1014 07:02:18.970226 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="extract" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.970275 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="extract" Oct 14 07:02:18 crc kubenswrapper[5058]: E1014 07:02:18.970296 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="pull" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.970312 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="pull" Oct 14 07:02:18 crc kubenswrapper[5058]: E1014 07:02:18.970334 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="util" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.970346 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="util" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.970584 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="419dbddd-983d-4eee-8e5a-9ad9ee3de5b2" containerName="extract" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.971285 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.975149 5058 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-tlrb2" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.975533 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Oct 14 07:02:18 crc kubenswrapper[5058]: I1014 07:02:18.975749 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.003828 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx"] Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.141078 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qftmv\" (UniqueName: \"kubernetes.io/projected/0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785-kube-api-access-qftmv\") pod \"cert-manager-operator-controller-manager-57cd46d6d-8xtlx\" (UID: \"0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.242169 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qftmv\" (UniqueName: \"kubernetes.io/projected/0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785-kube-api-access-qftmv\") pod \"cert-manager-operator-controller-manager-57cd46d6d-8xtlx\" (UID: \"0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.276919 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qftmv\" (UniqueName: \"kubernetes.io/projected/0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785-kube-api-access-qftmv\") pod \"cert-manager-operator-controller-manager-57cd46d6d-8xtlx\" (UID: \"0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.294042 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.766238 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx"] Oct 14 07:02:19 crc kubenswrapper[5058]: W1014 07:02:19.775276 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a3e1ba1_8db8_4b3e_a77f_7d1435dfe785.slice/crio-253a84d171ebb9f1b2e7cabe09a2e507221251ece7742528d7f23671ca3113db WatchSource:0}: Error finding container 253a84d171ebb9f1b2e7cabe09a2e507221251ece7742528d7f23671ca3113db: Status 404 returned error can't find the container with id 253a84d171ebb9f1b2e7cabe09a2e507221251ece7742528d7f23671ca3113db Oct 14 07:02:19 crc kubenswrapper[5058]: I1014 07:02:19.996094 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" event={"ID":"0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785","Type":"ContainerStarted","Data":"253a84d171ebb9f1b2e7cabe09a2e507221251ece7742528d7f23671ca3113db"} Oct 14 07:02:27 crc kubenswrapper[5058]: I1014 07:02:27.045772 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" event={"ID":"0a3e1ba1-8db8-4b3e-a77f-7d1435dfe785","Type":"ContainerStarted","Data":"42053ee1b0b2a80fa823effb56184833887fd65c1960cb617a6c2a3553c8f623"} Oct 14 07:02:27 crc kubenswrapper[5058]: I1014 07:02:27.073051 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-8xtlx" podStartSLOduration=2.297734369 podStartE2EDuration="9.073033207s" podCreationTimestamp="2025-10-14 07:02:18 +0000 UTC" firstStartedPulling="2025-10-14 07:02:19.776915942 +0000 UTC m=+887.687999758" lastFinishedPulling="2025-10-14 07:02:26.55221479 +0000 UTC m=+894.463298596" observedRunningTime="2025-10-14 07:02:27.07033624 +0000 UTC m=+894.981420056" watchObservedRunningTime="2025-10-14 07:02:27.073033207 +0000 UTC m=+894.984117023" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.329021 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-8t8vv"] Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.330036 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.332042 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.332135 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.332357 5058 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hd8dw" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.342893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-8t8vv"] Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.480006 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-bound-sa-token\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.480071 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5fw\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-kube-api-access-md5fw\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.581352 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-bound-sa-token\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.581408 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5fw\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-kube-api-access-md5fw\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.608587 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-bound-sa-token\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.630461 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5fw\" (UniqueName: \"kubernetes.io/projected/82f63d1b-6c1d-40ea-a329-13bc585406f9-kube-api-access-md5fw\") pod \"cert-manager-webhook-d969966f-8t8vv\" (UID: \"82f63d1b-6c1d-40ea-a329-13bc585406f9\") " pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:29 crc kubenswrapper[5058]: I1014 07:02:29.653315 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:30 crc kubenswrapper[5058]: I1014 07:02:30.113760 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-8t8vv"] Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.069781 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" event={"ID":"82f63d1b-6c1d-40ea-a329-13bc585406f9","Type":"ContainerStarted","Data":"2e44f164a38071928442bca313a6d7434998fc5d11642de52f7dddf26273bc11"} Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.178918 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45"] Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.179976 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.182127 5058 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wpbvb" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.190442 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45"] Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.309115 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88v5x\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-kube-api-access-88v5x\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.309195 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.410661 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88v5x\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-kube-api-access-88v5x\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.410736 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.429460 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.431179 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-88v5x\" (UniqueName: \"kubernetes.io/projected/18cbb842-b89d-470a-8bcc-aee3eae87148-kube-api-access-88v5x\") pod \"cert-manager-cainjector-7d9f95dbf-n6c45\" (UID: \"18cbb842-b89d-470a-8bcc-aee3eae87148\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.502115 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" Oct 14 07:02:31 crc kubenswrapper[5058]: I1014 07:02:31.792212 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45"] Oct 14 07:02:31 crc kubenswrapper[5058]: W1014 07:02:31.808223 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18cbb842_b89d_470a_8bcc_aee3eae87148.slice/crio-c4a05d8e5b9326ab33d04f658c742af460510f174d90cb8e0d34c170035dcf61 WatchSource:0}: Error finding container c4a05d8e5b9326ab33d04f658c742af460510f174d90cb8e0d34c170035dcf61: Status 404 returned error can't find the container with id c4a05d8e5b9326ab33d04f658c742af460510f174d90cb8e0d34c170035dcf61 Oct 14 07:02:32 crc kubenswrapper[5058]: I1014 07:02:32.080499 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" event={"ID":"18cbb842-b89d-470a-8bcc-aee3eae87148","Type":"ContainerStarted","Data":"c4a05d8e5b9326ab33d04f658c742af460510f174d90cb8e0d34c170035dcf61"} Oct 14 07:02:33 crc kubenswrapper[5058]: I1014 07:02:33.656060 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:02:33 crc kubenswrapper[5058]: I1014 07:02:33.656487 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:02:35 crc kubenswrapper[5058]: I1014 07:02:35.149336 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" event={"ID":"18cbb842-b89d-470a-8bcc-aee3eae87148","Type":"ContainerStarted","Data":"72eb6660f2e85b52ab9fb57dc2e9f5446dd7fc1f24e3e4eb9be27c9d737ed8f8"} Oct 14 07:02:35 crc kubenswrapper[5058]: I1014 07:02:35.150918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" event={"ID":"82f63d1b-6c1d-40ea-a329-13bc585406f9","Type":"ContainerStarted","Data":"02f05f0b7c9868c1158268e83e450f27c4d7f258401ad5268e5f4d6edf4aba81"} Oct 14 07:02:35 crc kubenswrapper[5058]: I1014 07:02:35.151063 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:35 crc kubenswrapper[5058]: I1014 07:02:35.165248 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-n6c45" podStartSLOduration=1.468452231 podStartE2EDuration="4.165224245s" podCreationTimestamp="2025-10-14 07:02:31 +0000 UTC" firstStartedPulling="2025-10-14 07:02:31.812766959 +0000 UTC m=+899.723850755" 
lastFinishedPulling="2025-10-14 07:02:34.509538963 +0000 UTC m=+902.420622769" observedRunningTime="2025-10-14 07:02:35.161815968 +0000 UTC m=+903.072899794" watchObservedRunningTime="2025-10-14 07:02:35.165224245 +0000 UTC m=+903.076308081" Oct 14 07:02:35 crc kubenswrapper[5058]: I1014 07:02:35.181095 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" podStartSLOduration=1.814746719 podStartE2EDuration="6.181073997s" podCreationTimestamp="2025-10-14 07:02:29 +0000 UTC" firstStartedPulling="2025-10-14 07:02:30.128107965 +0000 UTC m=+898.039191771" lastFinishedPulling="2025-10-14 07:02:34.494435243 +0000 UTC m=+902.405519049" observedRunningTime="2025-10-14 07:02:35.179536003 +0000 UTC m=+903.090619829" watchObservedRunningTime="2025-10-14 07:02:35.181073997 +0000 UTC m=+903.092157793" Oct 14 07:02:39 crc kubenswrapper[5058]: I1014 07:02:39.659512 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-d969966f-8t8vv" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.507967 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-t7w96"] Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.511159 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.514712 5058 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fzmks" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.523650 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-t7w96"] Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.670576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.670876 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zjm\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-kube-api-access-47zjm\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.771975 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47zjm\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-kube-api-access-47zjm\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.772165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.802600 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.802714 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47zjm\" (UniqueName: \"kubernetes.io/projected/fc261d14-654d-4831-b263-57deaff5643c-kube-api-access-47zjm\") pod \"cert-manager-7d4cc89fcb-t7w96\" (UID: \"fc261d14-654d-4831-b263-57deaff5643c\") " pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:48 crc kubenswrapper[5058]: I1014 07:02:48.850915 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" Oct 14 07:02:49 crc kubenswrapper[5058]: I1014 07:02:49.370628 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-t7w96"] Oct 14 07:02:49 crc kubenswrapper[5058]: W1014 07:02:49.374621 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc261d14_654d_4831_b263_57deaff5643c.slice/crio-9063a2279558be288ffce481c39f324792a2b5ed3fe077fc0d07ace061a3988d WatchSource:0}: Error finding container 9063a2279558be288ffce481c39f324792a2b5ed3fe077fc0d07ace061a3988d: Status 404 returned error can't find the container with id 9063a2279558be288ffce481c39f324792a2b5ed3fe077fc0d07ace061a3988d Oct 14 07:02:50 crc kubenswrapper[5058]: I1014 07:02:50.265424 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" event={"ID":"fc261d14-654d-4831-b263-57deaff5643c","Type":"ContainerStarted","Data":"21505947c5faf2ed51e951ba28f4b8843cecfea3944e9cd056b230764a769b88"} Oct 14 07:02:50 crc kubenswrapper[5058]: I1014 07:02:50.265838 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" event={"ID":"fc261d14-654d-4831-b263-57deaff5643c","Type":"ContainerStarted","Data":"9063a2279558be288ffce481c39f324792a2b5ed3fe077fc0d07ace061a3988d"} Oct 14 07:02:50 crc kubenswrapper[5058]: I1014 07:02:50.296842 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-7d4cc89fcb-t7w96" podStartSLOduration=2.296820519 podStartE2EDuration="2.296820519s" podCreationTimestamp="2025-10-14 07:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:02:50.290819298 +0000 UTC m=+918.201903144" watchObservedRunningTime="2025-10-14 07:02:50.296820519 +0000 UTC m=+918.207904345" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.300176 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.301901 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.307457 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-npnc7" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.308277 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.308537 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.310236 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.452418 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdgv\" (UniqueName: \"kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv\") pod \"openstack-operator-index-nhvrl\" (UID: \"ddaeed4c-5dde-4d91-ba93-3d55cb17e496\") " pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.553485 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdgv\" (UniqueName: \"kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv\") pod \"openstack-operator-index-nhvrl\" (UID: \"ddaeed4c-5dde-4d91-ba93-3d55cb17e496\") " pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.572060 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdgv\" (UniqueName: \"kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv\") pod \"openstack-operator-index-nhvrl\" (UID: \"ddaeed4c-5dde-4d91-ba93-3d55cb17e496\") " pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.632338 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:53 crc kubenswrapper[5058]: I1014 07:02:53.878823 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:02:54 crc kubenswrapper[5058]: I1014 07:02:54.298376 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nhvrl" event={"ID":"ddaeed4c-5dde-4d91-ba93-3d55cb17e496","Type":"ContainerStarted","Data":"eedc48cd39b3d64fa81ce47cbe5041eb96ad646b5ffcbab239aecc08bbec455a"} Oct 14 07:02:55 crc kubenswrapper[5058]: I1014 07:02:55.310495 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nhvrl" event={"ID":"ddaeed4c-5dde-4d91-ba93-3d55cb17e496","Type":"ContainerStarted","Data":"63a62b31388da0fa6a1251e9613644caf32845bbda9ee959c499cdf80c0a12c8"} Oct 14 07:02:55 crc kubenswrapper[5058]: I1014 07:02:55.340100 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nhvrl" podStartSLOduration=1.185105141 podStartE2EDuration="2.340068535s" podCreationTimestamp="2025-10-14 07:02:53 +0000 UTC" firstStartedPulling="2025-10-14 07:02:53.881171488 +0000 UTC m=+921.792255314" lastFinishedPulling="2025-10-14 07:02:55.036134902 +0000 UTC m=+922.947218708" observedRunningTime="2025-10-14 07:02:55.332833268 +0000 UTC m=+923.243917174" watchObservedRunningTime="2025-10-14 07:02:55.340068535 +0000 UTC m=+923.251152381" Oct 14 07:02:56 crc kubenswrapper[5058]: I1014 07:02:56.655151 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.289318 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bc9r8"] Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.290177 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.312997 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bc9r8"] Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.323066 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nhvrl" podUID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" containerName="registry-server" containerID="cri-o://63a62b31388da0fa6a1251e9613644caf32845bbda9ee959c499cdf80c0a12c8" gracePeriod=2 Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.410625 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnk6\" (UniqueName: \"kubernetes.io/projected/6bc99329-2101-4c16-ada2-1ab7df1e51ca-kube-api-access-lqnk6\") pod \"openstack-operator-index-bc9r8\" (UID: \"6bc99329-2101-4c16-ada2-1ab7df1e51ca\") " pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.511914 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqnk6\" (UniqueName: \"kubernetes.io/projected/6bc99329-2101-4c16-ada2-1ab7df1e51ca-kube-api-access-lqnk6\") pod \"openstack-operator-index-bc9r8\" (UID: \"6bc99329-2101-4c16-ada2-1ab7df1e51ca\") " pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.532082 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnk6\" (UniqueName: \"kubernetes.io/projected/6bc99329-2101-4c16-ada2-1ab7df1e51ca-kube-api-access-lqnk6\") pod \"openstack-operator-index-bc9r8\" (UID: \"6bc99329-2101-4c16-ada2-1ab7df1e51ca\") " pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:02:57 crc kubenswrapper[5058]: I1014 07:02:57.613475 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.049035 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bc9r8"] Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.329784 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bc9r8" event={"ID":"6bc99329-2101-4c16-ada2-1ab7df1e51ca","Type":"ContainerStarted","Data":"c30b543ac692d319dee562dd921f19809cf5c210c3e37499d82e0fb2a5e94e9c"} Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.332123 5058 generic.go:334] "Generic (PLEG): container finished" podID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" containerID="63a62b31388da0fa6a1251e9613644caf32845bbda9ee959c499cdf80c0a12c8" exitCode=0 Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.332167 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nhvrl" event={"ID":"ddaeed4c-5dde-4d91-ba93-3d55cb17e496","Type":"ContainerDied","Data":"63a62b31388da0fa6a1251e9613644caf32845bbda9ee959c499cdf80c0a12c8"} Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.661567 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.729110 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdgv\" (UniqueName: \"kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv\") pod \"ddaeed4c-5dde-4d91-ba93-3d55cb17e496\" (UID: \"ddaeed4c-5dde-4d91-ba93-3d55cb17e496\") " Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.738637 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv" (OuterVolumeSpecName: "kube-api-access-xwdgv") pod "ddaeed4c-5dde-4d91-ba93-3d55cb17e496" (UID: "ddaeed4c-5dde-4d91-ba93-3d55cb17e496"). InnerVolumeSpecName "kube-api-access-xwdgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:02:58 crc kubenswrapper[5058]: I1014 07:02:58.830834 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdgv\" (UniqueName: \"kubernetes.io/projected/ddaeed4c-5dde-4d91-ba93-3d55cb17e496-kube-api-access-xwdgv\") on node \"crc\" DevicePath \"\"" Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.340873 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nhvrl" event={"ID":"ddaeed4c-5dde-4d91-ba93-3d55cb17e496","Type":"ContainerDied","Data":"eedc48cd39b3d64fa81ce47cbe5041eb96ad646b5ffcbab239aecc08bbec455a"} Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.341406 5058 scope.go:117] "RemoveContainer" containerID="63a62b31388da0fa6a1251e9613644caf32845bbda9ee959c499cdf80c0a12c8" Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.340914 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nhvrl" Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.342639 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bc9r8" event={"ID":"6bc99329-2101-4c16-ada2-1ab7df1e51ca","Type":"ContainerStarted","Data":"9485a041169c31d60dfb1f083b8b6bc9ea2b6935c7179abb98068059c454e7de"} Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.373175 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bc9r8" podStartSLOduration=1.914399266 podStartE2EDuration="2.373148843s" podCreationTimestamp="2025-10-14 07:02:57 +0000 UTC" firstStartedPulling="2025-10-14 07:02:58.057572072 +0000 UTC m=+925.968655878" lastFinishedPulling="2025-10-14 07:02:58.516321639 +0000 UTC m=+926.427405455" observedRunningTime="2025-10-14 07:02:59.36848303 +0000 UTC m=+927.279566846" watchObservedRunningTime="2025-10-14 07:02:59.373148843 +0000 UTC m=+927.284232649" Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.387920 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:02:59 crc kubenswrapper[5058]: I1014 07:02:59.393581 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nhvrl"] Oct 14 07:03:00 crc kubenswrapper[5058]: I1014 07:03:00.803218 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" path="/var/lib/kubelet/pods/ddaeed4c-5dde-4d91-ba93-3d55cb17e496/volumes" Oct 14 07:03:03 crc kubenswrapper[5058]: I1014 07:03:03.657649 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:03:03 crc kubenswrapper[5058]: I1014 07:03:03.658210 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:03:03 crc kubenswrapper[5058]: I1014 07:03:03.658304 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:03:03 crc kubenswrapper[5058]: I1014 07:03:03.659338 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:03:03 crc kubenswrapper[5058]: I1014 07:03:03.659470 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad" gracePeriod=600 Oct 14 07:03:04 crc kubenswrapper[5058]: I1014 07:03:04.381875 5058 generic.go:334] "Generic (PLEG): container finished" 
podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad" exitCode=0 Oct 14 07:03:04 crc kubenswrapper[5058]: I1014 07:03:04.382095 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad"} Oct 14 07:03:04 crc kubenswrapper[5058]: I1014 07:03:04.382182 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778"} Oct 14 07:03:04 crc kubenswrapper[5058]: I1014 07:03:04.382205 5058 scope.go:117] "RemoveContainer" containerID="e300c99e9aa5760363433f01fb5f81849230f77f3ce1c9de5aa4bba11aefb08d" Oct 14 07:03:07 crc kubenswrapper[5058]: I1014 07:03:07.614652 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:03:07 crc kubenswrapper[5058]: I1014 07:03:07.615160 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:03:07 crc kubenswrapper[5058]: I1014 07:03:07.655600 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:03:08 crc kubenswrapper[5058]: I1014 07:03:08.479001 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bc9r8" Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.448118 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"] Oct 14 07:03:15 crc kubenswrapper[5058]: E1014 07:03:15.449074 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" containerName="registry-server" Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.449091 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" containerName="registry-server" Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.449263 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddaeed4c-5dde-4d91-ba93-3d55cb17e496" containerName="registry-server" Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.450530 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.454511 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rj28q"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.460632 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"]
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.481947 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.482201 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkl8n\" (UniqueName: \"kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.482447 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.584715 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.584783 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkl8n\" (UniqueName: \"kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.584833 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.585654 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.585646 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.607387 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkl8n\" (UniqueName: \"kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n\") pod \"32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") " pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:15 crc kubenswrapper[5058]: I1014 07:03:15.782204 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:16 crc kubenswrapper[5058]: I1014 07:03:16.072265 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"]
Oct 14 07:03:16 crc kubenswrapper[5058]: I1014 07:03:16.498227 5058 generic.go:334] "Generic (PLEG): container finished" podID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerID="292c9962fa384283bf46603b046c7d95526b9b0f48770581c3b02c7306a3334c" exitCode=0
Oct 14 07:03:16 crc kubenswrapper[5058]: I1014 07:03:16.498316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" event={"ID":"f8113d5b-0ab3-4158-9f25-905f7d658e5c","Type":"ContainerDied","Data":"292c9962fa384283bf46603b046c7d95526b9b0f48770581c3b02c7306a3334c"}
Oct 14 07:03:16 crc kubenswrapper[5058]: I1014 07:03:16.498646 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" event={"ID":"f8113d5b-0ab3-4158-9f25-905f7d658e5c","Type":"ContainerStarted","Data":"f9a013622a89c3da4e672950fa970cc4ae7ec7b9c3e7cc2ab86011ca79f9b8fb"}
Oct 14 07:03:18 crc kubenswrapper[5058]: I1014 07:03:18.521375 5058 generic.go:334] "Generic (PLEG): container finished" podID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerID="86d16d9795741458f9e9b275d2d7d0082b9941caeeda30fe596991e1de52ac7c" exitCode=0
Oct 14 07:03:18 crc kubenswrapper[5058]: I1014 07:03:18.521511 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" event={"ID":"f8113d5b-0ab3-4158-9f25-905f7d658e5c","Type":"ContainerDied","Data":"86d16d9795741458f9e9b275d2d7d0082b9941caeeda30fe596991e1de52ac7c"}
Oct 14 07:03:19 crc kubenswrapper[5058]: I1014 07:03:19.531255 5058 generic.go:334] "Generic (PLEG): container finished" podID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerID="f694f0bcc960ed4e8c4c1b18c9492dba230978072a05013da305e6369e2e08a0" exitCode=0
Oct 14 07:03:19 crc kubenswrapper[5058]: I1014 07:03:19.531312 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" event={"ID":"f8113d5b-0ab3-4158-9f25-905f7d658e5c","Type":"ContainerDied","Data":"f694f0bcc960ed4e8c4c1b18c9492dba230978072a05013da305e6369e2e08a0"}
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.829767 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6"
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.905626 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util\") pod \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") "
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.905690 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkl8n\" (UniqueName: \"kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n\") pod \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") "
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.905716 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle\") pod \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\" (UID: \"f8113d5b-0ab3-4158-9f25-905f7d658e5c\") "
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.906402 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle" (OuterVolumeSpecName: "bundle") pod "f8113d5b-0ab3-4158-9f25-905f7d658e5c" (UID: "f8113d5b-0ab3-4158-9f25-905f7d658e5c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.912085 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n" (OuterVolumeSpecName: "kube-api-access-jkl8n") pod "f8113d5b-0ab3-4158-9f25-905f7d658e5c" (UID: "f8113d5b-0ab3-4158-9f25-905f7d658e5c"). InnerVolumeSpecName "kube-api-access-jkl8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:03:20 crc kubenswrapper[5058]: I1014 07:03:20.921108 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util" (OuterVolumeSpecName: "util") pod "f8113d5b-0ab3-4158-9f25-905f7d658e5c" (UID: "f8113d5b-0ab3-4158-9f25-905f7d658e5c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.006979 5058 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-util\") on node \"crc\" DevicePath \"\"" Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.007021 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkl8n\" (UniqueName: \"kubernetes.io/projected/f8113d5b-0ab3-4158-9f25-905f7d658e5c-kube-api-access-jkl8n\") on node \"crc\" DevicePath \"\"" Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.007033 5058 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8113d5b-0ab3-4158-9f25-905f7d658e5c-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.551615 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" event={"ID":"f8113d5b-0ab3-4158-9f25-905f7d658e5c","Type":"ContainerDied","Data":"f9a013622a89c3da4e672950fa970cc4ae7ec7b9c3e7cc2ab86011ca79f9b8fb"} Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.552192 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a013622a89c3da4e672950fa970cc4ae7ec7b9c3e7cc2ab86011ca79f9b8fb" Oct 14 07:03:21 crc kubenswrapper[5058]: I1014 07:03:21.551710 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.813089 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp"] Oct 14 07:03:27 crc kubenswrapper[5058]: E1014 07:03:27.813922 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="extract" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.813940 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="extract" Oct 14 07:03:27 crc kubenswrapper[5058]: E1014 07:03:27.813950 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="pull" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.813957 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="pull" Oct 14 07:03:27 crc kubenswrapper[5058]: E1014 07:03:27.813970 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="util" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.813978 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="util" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.814120 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8113d5b-0ab3-4158-9f25-905f7d658e5c" containerName="extract" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.814873 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.817207 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-f7lgt" Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.834904 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp"] Oct 14 07:03:27 crc kubenswrapper[5058]: I1014 07:03:27.907356 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpmjr\" (UniqueName: \"kubernetes.io/projected/364b619b-fb9b-4f5c-97ea-61c43987ea42-kube-api-access-xpmjr\") pod \"openstack-operator-controller-operator-64895cd698-zp5kp\" (UID: \"364b619b-fb9b-4f5c-97ea-61c43987ea42\") " pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:28 crc kubenswrapper[5058]: I1014 07:03:28.008962 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpmjr\" (UniqueName: \"kubernetes.io/projected/364b619b-fb9b-4f5c-97ea-61c43987ea42-kube-api-access-xpmjr\") pod \"openstack-operator-controller-operator-64895cd698-zp5kp\" (UID: \"364b619b-fb9b-4f5c-97ea-61c43987ea42\") " pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:28 crc kubenswrapper[5058]: I1014 07:03:28.034242 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpmjr\" (UniqueName: \"kubernetes.io/projected/364b619b-fb9b-4f5c-97ea-61c43987ea42-kube-api-access-xpmjr\") pod \"openstack-operator-controller-operator-64895cd698-zp5kp\" (UID: \"364b619b-fb9b-4f5c-97ea-61c43987ea42\") " pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:28 crc kubenswrapper[5058]: I1014 07:03:28.140026 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:28 crc kubenswrapper[5058]: I1014 07:03:28.464796 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp"] Oct 14 07:03:28 crc kubenswrapper[5058]: W1014 07:03:28.469737 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364b619b_fb9b_4f5c_97ea_61c43987ea42.slice/crio-56da0a461caf50bd1d696527d27d18da9ab34daa14e727a15f7a0ec250a7c3fd WatchSource:0}: Error finding container 56da0a461caf50bd1d696527d27d18da9ab34daa14e727a15f7a0ec250a7c3fd: Status 404 returned error can't find the container with id 56da0a461caf50bd1d696527d27d18da9ab34daa14e727a15f7a0ec250a7c3fd Oct 14 07:03:28 crc kubenswrapper[5058]: I1014 07:03:28.602503 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" event={"ID":"364b619b-fb9b-4f5c-97ea-61c43987ea42","Type":"ContainerStarted","Data":"56da0a461caf50bd1d696527d27d18da9ab34daa14e727a15f7a0ec250a7c3fd"} Oct 14 07:03:33 crc kubenswrapper[5058]: I1014 07:03:33.648762 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" event={"ID":"364b619b-fb9b-4f5c-97ea-61c43987ea42","Type":"ContainerStarted","Data":"956536be7aacdb1d0442566f35a73a0110b20ae5e56208223a21a825bb85196a"} Oct 14 07:03:35 crc kubenswrapper[5058]: I1014 07:03:35.663618 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" event={"ID":"364b619b-fb9b-4f5c-97ea-61c43987ea42","Type":"ContainerStarted","Data":"9e6a5811f3fc155335d1b1b6e97d4d43bbbe700af11f344b1959e6de9c2f431b"} Oct 14 07:03:35 crc kubenswrapper[5058]: I1014 07:03:35.664020 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:03:35 crc kubenswrapper[5058]: I1014 07:03:35.691577 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" podStartSLOduration=1.656298828 podStartE2EDuration="8.691559787s" podCreationTimestamp="2025-10-14 07:03:27 +0000 UTC" firstStartedPulling="2025-10-14 07:03:28.47502208 +0000 UTC m=+956.386105886" lastFinishedPulling="2025-10-14 07:03:35.510283039 +0000 UTC m=+963.421366845" observedRunningTime="2025-10-14 07:03:35.687910183 +0000 UTC m=+963.598993999" watchObservedRunningTime="2025-10-14 07:03:35.691559787 +0000 UTC m=+963.602643603" Oct 14 07:03:38 crc kubenswrapper[5058]: I1014 07:03:38.143881 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-64895cd698-zp5kp" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.611314 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"] Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.612763 5058 util.go:30] "No sandbox for pod can be found. 
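The pod_startup_latency_tracker entry above carries enough data to check itself: podStartSLOduration should be the end-to-end startup duration minus the image-pull window (firstStartedPulling to lastFinishedPulling), consistent with the startup SLI excluding pull time. A quick arithmetic check in Go, with the m=+ monotonic offsets copied from that entry:

// slocheck.go: verify podStartSLOduration from the log entry above.
package main

import "fmt"

func main() {
	const (
		e2e          = 8.691559787   // podStartE2EDuration, seconds
		startedPull  = 956.386105886 // firstStartedPulling, m=+ offset
		finishedPull = 963.421366845 // lastFinishedPulling, m=+ offset
	)
	pulling := finishedPull - startedPull // image-pull window
	fmt.Printf("pulling=%.9fs slo=%.9fs\n", pulling, e2e-pulling)
	// pulling=7.035260959s, slo=1.656298828s, matching
	// podStartSLOduration=1.656298828 in the entry.
}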
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.614595 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-c6cnq"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.625535 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.626789 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.629316 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ctbgz"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.630668 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.635619 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.656184 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.657093 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.658722 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gw88f"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.697994 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.705547 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.708164 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.712301 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wwgqv"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.712982 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.718966 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"
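The repeated reflector.go "Caches populated for *v1.Secret" lines mark per-secret watch caches becoming ready before each operator pod starts. A rough client-go analogue of populating and waiting on such a cache, as an illustration only (kubelet's reflectors are internal; the in-cluster config and secret name here are taken from the log):

// secretcache.go: start a namespaced Secret informer and wait for its cache.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch Secrets only in the namespace seen in the log.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openstack-operators"))
	secrets := factory.Core().V1().Secrets()

	stopCh := make(chan struct{})
	factory.Start(stopCh)

	// Roughly the moment a "Caches populated for *v1.Secret ..." line is logged.
	if !cache.WaitForCacheSync(stopCh, secrets.Informer().HasSynced) {
		panic("cache never synced")
	}
	s, err := secrets.Lister().Secrets("openstack-operators").
		Get("barbican-operator-controller-manager-dockercfg-c6cnq")
	fmt.Println("found:", err == nil && s != nil, "err:", err)
}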
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.727918 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xh68x"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.737861 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.750275 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.758322 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9psc\" (UniqueName: \"kubernetes.io/projected/182bf838-8aa5-488a-80cb-b0b5d45b8503-kube-api-access-d9psc\") pod \"barbican-operator-controller-manager-658bdf4b74-ktwj8\" (UID: \"182bf838-8aa5-488a-80cb-b0b5d45b8503\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.758384 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbmj\" (UniqueName: \"kubernetes.io/projected/e4f8d193-5863-4d60-9c5a-8596e64dcdc5-kube-api-access-5cbmj\") pod \"cinder-operator-controller-manager-7b7fb68549-tdlj7\" (UID: \"e4f8d193-5863-4d60-9c5a-8596e64dcdc5\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.758455 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk94h\" (UniqueName: \"kubernetes.io/projected/eb904271-90a2-4e14-8034-230236c0bbda-kube-api-access-hk94h\") pod \"designate-operator-controller-manager-85d5d9dd78-rzjp4\" (UID: \"eb904271-90a2-4e14-8034-230236c0bbda\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.759315 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.760619 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.764659 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bxlvs"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.770664 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.785383 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.786449 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.790826 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wsdtw"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.790968 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.821671 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.824539 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.824700 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.833416 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.836527 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j6vn2"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.846381 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.847276 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.856128 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gsvgf"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859372 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk94h\" (UniqueName: \"kubernetes.io/projected/eb904271-90a2-4e14-8034-230236c0bbda-kube-api-access-hk94h\") pod \"designate-operator-controller-manager-85d5d9dd78-rzjp4\" (UID: \"eb904271-90a2-4e14-8034-230236c0bbda\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9psc\" (UniqueName: \"kubernetes.io/projected/182bf838-8aa5-488a-80cb-b0b5d45b8503-kube-api-access-d9psc\") pod \"barbican-operator-controller-manager-658bdf4b74-ktwj8\" (UID: \"182bf838-8aa5-488a-80cb-b0b5d45b8503\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859468 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l858d\" (UniqueName: \"kubernetes.io/projected/88b4ba9d-c49f-47e2-aff8-c4e766d25bcc-kube-api-access-l858d\") pod \"horizon-operator-controller-manager-7ffbcb7588-c5sjr\" (UID: \"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbmj\" (UniqueName: \"kubernetes.io/projected/e4f8d193-5863-4d60-9c5a-8596e64dcdc5-kube-api-access-5cbmj\") pod \"cinder-operator-controller-manager-7b7fb68549-tdlj7\" (UID: \"e4f8d193-5863-4d60-9c5a-8596e64dcdc5\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859510 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzsq\" (UniqueName: \"kubernetes.io/projected/f25c3017-c824-47f3-add1-9bd90d8c6d69-kube-api-access-nfzsq\") pod \"glance-operator-controller-manager-84b9b84486-b6kh5\" (UID: \"f25c3017-c824-47f3-add1-9bd90d8c6d69\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859525 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859568 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbxs\" (UniqueName: \"kubernetes.io/projected/871cbb71-8b23-4e59-958f-436fb42cf771-kube-api-access-wpbxs\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.859598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k79sp\" (UniqueName: \"kubernetes.io/projected/d3e72aae-75a1-4af7-8db5-2962735f6e86-kube-api-access-k79sp\") pod \"heat-operator-controller-manager-858f76bbdd-8sbg9\" (UID: \"d3e72aae-75a1-4af7-8db5-2962735f6e86\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.862878 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"] Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.894693 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbmj\" (UniqueName: \"kubernetes.io/projected/e4f8d193-5863-4d60-9c5a-8596e64dcdc5-kube-api-access-5cbmj\") pod \"cinder-operator-controller-manager-7b7fb68549-tdlj7\" (UID: \"e4f8d193-5863-4d60-9c5a-8596e64dcdc5\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.912991 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"] Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.914291 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.917137 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-szfqd" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.917195 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk94h\" (UniqueName: \"kubernetes.io/projected/eb904271-90a2-4e14-8034-230236c0bbda-kube-api-access-hk94h\") pod \"designate-operator-controller-manager-85d5d9dd78-rzjp4\" (UID: \"eb904271-90a2-4e14-8034-230236c0bbda\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.931059 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9psc\" (UniqueName: \"kubernetes.io/projected/182bf838-8aa5-488a-80cb-b0b5d45b8503-kube-api-access-d9psc\") pod \"barbican-operator-controller-manager-658bdf4b74-ktwj8\" (UID: \"182bf838-8aa5-488a-80cb-b0b5d45b8503\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.938142 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.958601 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960506 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l858d\" (UniqueName: \"kubernetes.io/projected/88b4ba9d-c49f-47e2-aff8-c4e766d25bcc-kube-api-access-l858d\") pod \"horizon-operator-controller-manager-7ffbcb7588-c5sjr\" (UID: \"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960566 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzsq\" (UniqueName: \"kubernetes.io/projected/f25c3017-c824-47f3-add1-9bd90d8c6d69-kube-api-access-nfzsq\") pod \"glance-operator-controller-manager-84b9b84486-b6kh5\" (UID: \"f25c3017-c824-47f3-add1-9bd90d8c6d69\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960582 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960603 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ctr\" (UniqueName: \"kubernetes.io/projected/b59421ab-44ca-48a0-945b-f6c42153397d-kube-api-access-g2ctr\") pod \"ironic-operator-controller-manager-9c5c78d49-2gj7w\" (UID: \"b59421ab-44ca-48a0-945b-f6c42153397d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960632 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbkf\" (UniqueName: \"kubernetes.io/projected/67131cbe-92c7-4051-a27b-1ed2527b6b04-kube-api-access-prbkf\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sv4jh\" (UID: \"67131cbe-92c7-4051-a27b-1ed2527b6b04\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960657 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbxs\" (UniqueName: \"kubernetes.io/projected/871cbb71-8b23-4e59-958f-436fb42cf771-kube-api-access-wpbxs\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960677 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxq5\" (UniqueName: \"kubernetes.io/projected/096b5116-c25a-4443-b8c0-af75f2b92262-kube-api-access-trxq5\") pod \"manila-operator-controller-manager-5f67fbc655-lsznt\" (UID: \"096b5116-c25a-4443-b8c0-af75f2b92262\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.960694 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k79sp\" (UniqueName: 
\"kubernetes.io/projected/d3e72aae-75a1-4af7-8db5-2962735f6e86-kube-api-access-k79sp\") pod \"heat-operator-controller-manager-858f76bbdd-8sbg9\" (UID: \"d3e72aae-75a1-4af7-8db5-2962735f6e86\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" Oct 14 07:04:12 crc kubenswrapper[5058]: E1014 07:04:12.961306 5058 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 14 07:04:12 crc kubenswrapper[5058]: E1014 07:04:12.961354 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert podName:871cbb71-8b23-4e59-958f-436fb42cf771 nodeName:}" failed. No retries permitted until 2025-10-14 07:04:13.46133712 +0000 UTC m=+1001.372420926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert") pod "infra-operator-controller-manager-656bcbd775-8tmms" (UID: "871cbb71-8b23-4e59-958f-436fb42cf771") : secret "infra-operator-webhook-server-cert" not found Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.975508 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"] Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.976530 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.980207 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6zvfg" Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.983006 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.994849 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"]
Oct 14 07:04:12 crc kubenswrapper[5058]: I1014 07:04:12.995396 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbxs\" (UniqueName: \"kubernetes.io/projected/871cbb71-8b23-4e59-958f-436fb42cf771-kube-api-access-wpbxs\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.001343 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzsq\" (UniqueName: \"kubernetes.io/projected/f25c3017-c824-47f3-add1-9bd90d8c6d69-kube-api-access-nfzsq\") pod \"glance-operator-controller-manager-84b9b84486-b6kh5\" (UID: \"f25c3017-c824-47f3-add1-9bd90d8c6d69\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.012212 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.014374 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l858d\" (UniqueName: \"kubernetes.io/projected/88b4ba9d-c49f-47e2-aff8-c4e766d25bcc-kube-api-access-l858d\") pod \"horizon-operator-controller-manager-7ffbcb7588-c5sjr\" (UID: \"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.016649 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k79sp\" (UniqueName: \"kubernetes.io/projected/d3e72aae-75a1-4af7-8db5-2962735f6e86-kube-api-access-k79sp\") pod \"heat-operator-controller-manager-858f76bbdd-8sbg9\" (UID: \"d3e72aae-75a1-4af7-8db5-2962735f6e86\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.017738 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.020508 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-h4vv4"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.030563 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.038429 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.049330 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.059986 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.060947 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.061242 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.065335 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xqc9l"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.066261 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.067930 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk62\" (UniqueName: \"kubernetes.io/projected/156130c4-7aa0-4acd-b91b-d69ab4c86747-kube-api-access-nwk62\") pod \"neutron-operator-controller-manager-79d585cb66-j8qpq\" (UID: \"156130c4-7aa0-4acd-b91b-d69ab4c86747\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.068136 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ctr\" (UniqueName: \"kubernetes.io/projected/b59421ab-44ca-48a0-945b-f6c42153397d-kube-api-access-g2ctr\") pod \"ironic-operator-controller-manager-9c5c78d49-2gj7w\" (UID: \"b59421ab-44ca-48a0-945b-f6c42153397d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.069770 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbkf\" (UniqueName: \"kubernetes.io/projected/67131cbe-92c7-4051-a27b-1ed2527b6b04-kube-api-access-prbkf\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sv4jh\" (UID: \"67131cbe-92c7-4051-a27b-1ed2527b6b04\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.070050 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq6q\" (UniqueName: \"kubernetes.io/projected/a005a6bf-bd52-4d64-9895-08889634f350-kube-api-access-2sq6q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-p27j7\" (UID: \"a005a6bf-bd52-4d64-9895-08889634f350\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.070217 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxq5\" (UniqueName: \"kubernetes.io/projected/096b5116-c25a-4443-b8c0-af75f2b92262-kube-api-access-trxq5\") pod \"manila-operator-controller-manager-5f67fbc655-lsznt\" (UID: \"096b5116-c25a-4443-b8c0-af75f2b92262\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.074417 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.074524 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.096453 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4ljgx"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.099676 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbkf\" (UniqueName: \"kubernetes.io/projected/67131cbe-92c7-4051-a27b-1ed2527b6b04-kube-api-access-prbkf\") pod \"keystone-operator-controller-manager-55b6b7c7b8-sv4jh\" (UID: \"67131cbe-92c7-4051-a27b-1ed2527b6b04\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.103076 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.106351 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxq5\" (UniqueName: \"kubernetes.io/projected/096b5116-c25a-4443-b8c0-af75f2b92262-kube-api-access-trxq5\") pod \"manila-operator-controller-manager-5f67fbc655-lsznt\" (UID: \"096b5116-c25a-4443-b8c0-af75f2b92262\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.118689 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ctr\" (UniqueName: \"kubernetes.io/projected/b59421ab-44ca-48a0-945b-f6c42153397d-kube-api-access-g2ctr\") pod \"ironic-operator-controller-manager-9c5c78d49-2gj7w\" (UID: \"b59421ab-44ca-48a0-945b-f6c42153397d\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.147023 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.154683 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.170351 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.174121 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.177319 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspwg\" (UniqueName: \"kubernetes.io/projected/8d419868-4916-4ad0-bb8b-b75dec32fd07-kube-api-access-gspwg\") pod \"octavia-operator-controller-manager-69fdcfc5f5-nggc9\" (UID: \"8d419868-4916-4ad0-bb8b-b75dec32fd07\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.177412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sq6q\" (UniqueName: \"kubernetes.io/projected/a005a6bf-bd52-4d64-9895-08889634f350-kube-api-access-2sq6q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-p27j7\" (UID: \"a005a6bf-bd52-4d64-9895-08889634f350\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.177483 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwk62\" (UniqueName: \"kubernetes.io/projected/156130c4-7aa0-4acd-b91b-d69ab4c86747-kube-api-access-nwk62\") pod \"neutron-operator-controller-manager-79d585cb66-j8qpq\" (UID: \"156130c4-7aa0-4acd-b91b-d69ab4c86747\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.177733 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vfkc\" (UniqueName: \"kubernetes.io/projected/cf76dba6-ea6c-4e07-9261-0c960bbae828-kube-api-access-6vfkc\") pod \"nova-operator-controller-manager-5df598886f-hs54l\" (UID: \"cf76dba6-ea6c-4e07-9261-0c960bbae828\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.187389 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.189132 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.195437 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.195724 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gtzlt"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.223171 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sq6q\" (UniqueName: \"kubernetes.io/projected/a005a6bf-bd52-4d64-9895-08889634f350-kube-api-access-2sq6q\") pod \"mariadb-operator-controller-manager-f9fb45f8f-p27j7\" (UID: \"a005a6bf-bd52-4d64-9895-08889634f350\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.229838 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwk62\" (UniqueName: \"kubernetes.io/projected/156130c4-7aa0-4acd-b91b-d69ab4c86747-kube-api-access-nwk62\") pod \"neutron-operator-controller-manager-79d585cb66-j8qpq\" (UID: \"156130c4-7aa0-4acd-b91b-d69ab4c86747\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.234143 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.259092 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.261162 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.265309 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9tbxw"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.270148 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.278290 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.279860 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.282838 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-w5wgz"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.287573 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr9k2\" (UniqueName: \"kubernetes.io/projected/b5cb6101-1d0d-45af-9a55-09c33013c269-kube-api-access-xr9k2\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.287619 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gspwg\" (UniqueName: \"kubernetes.io/projected/8d419868-4916-4ad0-bb8b-b75dec32fd07-kube-api-access-gspwg\") pod \"octavia-operator-controller-manager-69fdcfc5f5-nggc9\" (UID: \"8d419868-4916-4ad0-bb8b-b75dec32fd07\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.287678 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.287733 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vfkc\" (UniqueName: \"kubernetes.io/projected/cf76dba6-ea6c-4e07-9261-0c960bbae828-kube-api-access-6vfkc\") pod \"nova-operator-controller-manager-5df598886f-hs54l\" (UID: \"cf76dba6-ea6c-4e07-9261-0c960bbae828\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.292830 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.294721 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.302631 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qd5kq"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.303101 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.306599 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gspwg\" (UniqueName: \"kubernetes.io/projected/8d419868-4916-4ad0-bb8b-b75dec32fd07-kube-api-access-gspwg\") pod \"octavia-operator-controller-manager-69fdcfc5f5-nggc9\" (UID: \"8d419868-4916-4ad0-bb8b-b75dec32fd07\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.306933 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vfkc\" (UniqueName: \"kubernetes.io/projected/cf76dba6-ea6c-4e07-9261-0c960bbae828-kube-api-access-6vfkc\") pod \"nova-operator-controller-manager-5df598886f-hs54l\" (UID: \"cf76dba6-ea6c-4e07-9261-0c960bbae828\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.315005 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.328136 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.330311 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g8q6c"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.335446 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.366379 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.383924 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.385698 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.388437 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dcl7w"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.388775 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcmb\" (UniqueName: \"kubernetes.io/projected/853f3dc8-d9ec-4220-8190-dd7f5789e3ef-kube-api-access-jfcmb\") pod \"placement-operator-controller-manager-68b6c87b68-nds65\" (UID: \"853f3dc8-d9ec-4220-8190-dd7f5789e3ef\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.388857 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr9k2\" (UniqueName: \"kubernetes.io/projected/b5cb6101-1d0d-45af-9a55-09c33013c269-kube-api-access-xr9k2\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.392312 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p94pr\" (UniqueName: \"kubernetes.io/projected/7dfea04c-ae28-4df3-addc-f14abc359748-kube-api-access-p94pr\") pod \"ovn-operator-controller-manager-79df5fb58c-mvrtg\" (UID: \"7dfea04c-ae28-4df3-addc-f14abc359748\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.392347 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qgn\" (UniqueName: \"kubernetes.io/projected/a5666028-58d7-4a84-bdcb-4df88b8bbcfe-kube-api-access-n8qgn\") pod \"swift-operator-controller-manager-db6d7f97b-ndx29\" (UID: \"a5666028-58d7-4a84-bdcb-4df88b8bbcfe\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.392378 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: E1014 07:04:13.392564 5058 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 14 07:04:13 crc kubenswrapper[5058]: E1014 07:04:13.392609 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert podName:b5cb6101-1d0d-45af-9a55-09c33013c269 nodeName:}" failed. No retries permitted until 2025-10-14 07:04:13.892594795 +0000 UTC m=+1001.803678601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert") pod "openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" (UID: "b5cb6101-1d0d-45af-9a55-09c33013c269") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.420721 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.430707 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.432710 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.437658 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-g8p9g"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.444633 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr9k2\" (UniqueName: \"kubernetes.io/projected/b5cb6101-1d0d-45af-9a55-09c33013c269-kube-api-access-xr9k2\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.470569 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494352 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p94pr\" (UniqueName: \"kubernetes.io/projected/7dfea04c-ae28-4df3-addc-f14abc359748-kube-api-access-p94pr\") pod \"ovn-operator-controller-manager-79df5fb58c-mvrtg\" (UID: \"7dfea04c-ae28-4df3-addc-f14abc359748\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494389 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qgn\" (UniqueName: \"kubernetes.io/projected/a5666028-58d7-4a84-bdcb-4df88b8bbcfe-kube-api-access-n8qgn\") pod \"swift-operator-controller-manager-db6d7f97b-ndx29\" (UID: \"a5666028-58d7-4a84-bdcb-4df88b8bbcfe\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494427 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6sh\" (UniqueName: \"kubernetes.io/projected/287a5a36-ab23-48aa-bce3-c404703271d0-kube-api-access-xg6sh\") pod \"telemetry-operator-controller-manager-67cfc6749b-gstgk\" (UID: \"287a5a36-ab23-48aa-bce3-c404703271d0\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcmb\" (UniqueName: \"kubernetes.io/projected/853f3dc8-d9ec-4220-8190-dd7f5789e3ef-kube-api-access-jfcmb\") pod \"placement-operator-controller-manager-68b6c87b68-nds65\" (UID: \"853f3dc8-d9ec-4220-8190-dd7f5789e3ef\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494510 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.494528 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdtsj\" (UniqueName: \"kubernetes.io/projected/0285fda4-68ce-4cf0-bb77-50d2e3a66950-kube-api-access-xdtsj\") pod \"test-operator-controller-manager-5458f77c4-2fm9v\" (UID: \"0285fda4-68ce-4cf0-bb77-50d2e3a66950\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.496830 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.507990 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/871cbb71-8b23-4e59-958f-436fb42cf771-cert\") pod \"infra-operator-controller-manager-656bcbd775-8tmms\" (UID: \"871cbb71-8b23-4e59-958f-436fb42cf771\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.508693 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz"]
Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.509849 5058 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.512100 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xxrx9" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.512394 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.515694 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcmb\" (UniqueName: \"kubernetes.io/projected/853f3dc8-d9ec-4220-8190-dd7f5789e3ef-kube-api-access-jfcmb\") pod \"placement-operator-controller-manager-68b6c87b68-nds65\" (UID: \"853f3dc8-d9ec-4220-8190-dd7f5789e3ef\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.518001 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.520191 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p94pr\" (UniqueName: \"kubernetes.io/projected/7dfea04c-ae28-4df3-addc-f14abc359748-kube-api-access-p94pr\") pod \"ovn-operator-controller-manager-79df5fb58c-mvrtg\" (UID: \"7dfea04c-ae28-4df3-addc-f14abc359748\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.520486 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.525122 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qgn\" (UniqueName: \"kubernetes.io/projected/a5666028-58d7-4a84-bdcb-4df88b8bbcfe-kube-api-access-n8qgn\") pod \"swift-operator-controller-manager-db6d7f97b-ndx29\" (UID: \"a5666028-58d7-4a84-bdcb-4df88b8bbcfe\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.543116 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.544005 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.547498 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2wfc9" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.555124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.559655 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.571910 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.591164 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.596837 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.596926 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6sh\" (UniqueName: \"kubernetes.io/projected/287a5a36-ab23-48aa-bce3-c404703271d0-kube-api-access-xg6sh\") pod \"telemetry-operator-controller-manager-67cfc6749b-gstgk\" (UID: \"287a5a36-ab23-48aa-bce3-c404703271d0\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.597049 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5s9\" (UniqueName: \"kubernetes.io/projected/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-kube-api-access-zj5s9\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.597138 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4j6\" (UniqueName: \"kubernetes.io/projected/f40617a5-db6e-429b-8665-58fa79067b23-kube-api-access-qp4j6\") pod \"watcher-operator-controller-manager-7f554bff7b-2gfn7\" (UID: \"f40617a5-db6e-429b-8665-58fa79067b23\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.597171 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdtsj\" (UniqueName: \"kubernetes.io/projected/0285fda4-68ce-4cf0-bb77-50d2e3a66950-kube-api-access-xdtsj\") pod \"test-operator-controller-manager-5458f77c4-2fm9v\" (UID: \"0285fda4-68ce-4cf0-bb77-50d2e3a66950\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.609605 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.619623 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6sh\" (UniqueName: \"kubernetes.io/projected/287a5a36-ab23-48aa-bce3-c404703271d0-kube-api-access-xg6sh\") pod \"telemetry-operator-controller-manager-67cfc6749b-gstgk\" (UID: \"287a5a36-ab23-48aa-bce3-c404703271d0\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.621135 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdtsj\" (UniqueName: \"kubernetes.io/projected/0285fda4-68ce-4cf0-bb77-50d2e3a66950-kube-api-access-xdtsj\") pod \"test-operator-controller-manager-5458f77c4-2fm9v\" (UID: \"0285fda4-68ce-4cf0-bb77-50d2e3a66950\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.626847 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.642348 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.676172 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.698037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5s9\" (UniqueName: \"kubernetes.io/projected/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-kube-api-access-zj5s9\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.698125 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4j6\" (UniqueName: \"kubernetes.io/projected/f40617a5-db6e-429b-8665-58fa79067b23-kube-api-access-qp4j6\") pod \"watcher-operator-controller-manager-7f554bff7b-2gfn7\" (UID: \"f40617a5-db6e-429b-8665-58fa79067b23\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.698194 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnk9\" (UniqueName: \"kubernetes.io/projected/4b8f12d6-84f1-4095-817a-bbbfc74bb384-kube-api-access-dbnk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c\" (UID: \"4b8f12d6-84f1-4095-817a-bbbfc74bb384\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.698240 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: E1014 07:04:13.698408 5058 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 14 07:04:13 crc kubenswrapper[5058]: E1014 07:04:13.698464 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert podName:b0d91fa4-f695-4d4d-8a83-22fb8d5a2660 nodeName:}" failed. No retries permitted until 2025-10-14 07:04:14.19844525 +0000 UTC m=+1002.109529066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert") pod "openstack-operator-controller-manager-7fb8c88b76-t9bzz" (UID: "b0d91fa4-f695-4d4d-8a83-22fb8d5a2660") : secret "webhook-server-cert" not found Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.706505 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.715887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj5s9\" (UniqueName: \"kubernetes.io/projected/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-kube-api-access-zj5s9\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.723762 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4j6\" (UniqueName: \"kubernetes.io/projected/f40617a5-db6e-429b-8665-58fa79067b23-kube-api-access-qp4j6\") pod \"watcher-operator-controller-manager-7f554bff7b-2gfn7\" (UID: \"f40617a5-db6e-429b-8665-58fa79067b23\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.730596 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.768894 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.799380 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnk9\" (UniqueName: \"kubernetes.io/projected/4b8f12d6-84f1-4095-817a-bbbfc74bb384-kube-api-access-dbnk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c\" (UID: \"4b8f12d6-84f1-4095-817a-bbbfc74bb384\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.800163 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.819169 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnk9\" (UniqueName: \"kubernetes.io/projected/4b8f12d6-84f1-4095-817a-bbbfc74bb384-kube-api-access-dbnk9\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c\" (UID: \"4b8f12d6-84f1-4095-817a-bbbfc74bb384\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.900584 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.911481 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5cb6101-1d0d-45af-9a55-09c33013c269-cert\") pod \"openstack-baremetal-operator-controller-manager-55b7d44848jxd9b\" (UID: \"b5cb6101-1d0d-45af-9a55-09c33013c269\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.949256 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.955891 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.959883 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"] Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.997601 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" event={"ID":"e4f8d193-5863-4d60-9c5a-8596e64dcdc5","Type":"ContainerStarted","Data":"870ce8e40a8cddc6eea1ef785114dff4db6a43ac1014f1aaeff0b61159e89233"} Oct 14 07:04:13 crc kubenswrapper[5058]: I1014 07:04:13.998778 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" event={"ID":"182bf838-8aa5-488a-80cb-b0b5d45b8503","Type":"ContainerStarted","Data":"c22d1c2506b688c75b3f217a93402132db84970c5b9b112fd4ca554ba61a2588"} Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.046751 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e72aae_75a1_4af7_8db5_2962735f6e86.slice/crio-fe8f632eaa7be953273b02ab5f1ffbd0bfc2e902d44b8c73741401b7629e81af WatchSource:0}: Error finding container fe8f632eaa7be953273b02ab5f1ffbd0bfc2e902d44b8c73741401b7629e81af: Status 404 returned error can't find the container with id fe8f632eaa7be953273b02ab5f1ffbd0bfc2e902d44b8c73741401b7629e81af Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.178776 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.207242 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.212643 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b0d91fa4-f695-4d4d-8a83-22fb8d5a2660-cert\") pod \"openstack-operator-controller-manager-7fb8c88b76-t9bzz\" (UID: \"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660\") " pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.252071 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.394711 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.402018 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b4ba9d_c49f_47e2_aff8_c4e766d25bcc.slice/crio-08adcb4cf9ed31134288ce6ac6cb8249f1644ef427c8e7848907bee364fc4440 WatchSource:0}: Error finding container 08adcb4cf9ed31134288ce6ac6cb8249f1644ef427c8e7848907bee364fc4440: Status 404 returned error can't find the container with id 08adcb4cf9ed31134288ce6ac6cb8249f1644ef427c8e7848907bee364fc4440 Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.425253 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.428043 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.434828 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.435225 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096b5116_c25a_4443_b8c0_af75f2b92262.slice/crio-027cd9d682e62308170d8067c3db6a2d285db62c95ec38ac70a55b6244f88f7e WatchSource:0}: Error finding container 027cd9d682e62308170d8067c3db6a2d285db62c95ec38ac70a55b6244f88f7e: Status 404 returned error can't find the container with id 027cd9d682e62308170d8067c3db6a2d285db62c95ec38ac70a55b6244f88f7e Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.455973 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.740871 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.760462 5058 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.768390 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.773381 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0285fda4_68ce_4cf0_bb77_50d2e3a66950.slice/crio-b2410ec83d42d08b1f1a7212d23c29c3e7c8d6bc72af541c958eb32044aa8518 WatchSource:0}: Error finding container b2410ec83d42d08b1f1a7212d23c29c3e7c8d6bc72af541c958eb32044aa8518: Status 404 returned error can't find the container with id b2410ec83d42d08b1f1a7212d23c29c3e7c8d6bc72af541c958eb32044aa8518 Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.782003 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.783957 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda005a6bf_bd52_4d64_9895_08889634f350.slice/crio-4647b679e34f79f09d8c9312b88bc8d02a40ad012bf9c09642d8147dc7ca5fc7 WatchSource:0}: Error finding container 4647b679e34f79f09d8c9312b88bc8d02a40ad012bf9c09642d8147dc7ca5fc7: Status 404 returned error can't find the container with id 4647b679e34f79f09d8c9312b88bc8d02a40ad012bf9c09642d8147dc7ca5fc7 Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.817446 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.817492 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.817503 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.817512 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.823389 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"] Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.827731 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.829247 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d419868_4916_4ad0_bb8b_b75dec32fd07.slice/crio-d9a9b1c52e47b4d00fbf46c1b3121b7132178b8f6c24cec818eeb94215470f49 WatchSource:0}: Error finding container d9a9b1c52e47b4d00fbf46c1b3121b7132178b8f6c24cec818eeb94215470f49: Status 404 returned error can't find the container with id d9a9b1c52e47b4d00fbf46c1b3121b7132178b8f6c24cec818eeb94215470f49 Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.833326 5058 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod156130c4_7aa0_4acd_b91b_d69ab4c86747.slice/crio-dc548cbdd1ac5980cdd7fe345ecb1bb478a1856561f997f0a067691f93dafb17 WatchSource:0}: Error finding container dc548cbdd1ac5980cdd7fe345ecb1bb478a1856561f997f0a067691f93dafb17: Status 404 returned error can't find the container with id dc548cbdd1ac5980cdd7fe345ecb1bb478a1856561f997f0a067691f93dafb17 Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.834651 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5666028_58d7_4a84_bdcb_4df88b8bbcfe.slice/crio-db8e49d22496f4dc9706089d6fa6a234e59e32000cfaa86fb4a9018a9c582b35 WatchSource:0}: Error finding container db8e49d22496f4dc9706089d6fa6a234e59e32000cfaa86fb4a9018a9c582b35: Status 404 returned error can't find the container with id db8e49d22496f4dc9706089d6fa6a234e59e32000cfaa86fb4a9018a9c582b35 Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.835638 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"] Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.838883 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf76dba6_ea6c_4e07_9261_0c960bbae828.slice/crio-b30718d5cc99bd2d9481811f638834950777fcf59457fc376309b8d5676b68a6 WatchSource:0}: Error finding container b30718d5cc99bd2d9481811f638834950777fcf59457fc376309b8d5676b68a6: Status 404 returned error can't find the container with id b30718d5cc99bd2d9481811f638834950777fcf59457fc376309b8d5676b68a6 Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.840228 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"] Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.840466 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8qgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-db6d7f97b-ndx29_openstack-operators(a5666028-58d7-4a84-bdcb-4df88b8bbcfe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.854397 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwk62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-79d585cb66-j8qpq_openstack-operators(156130c4-7aa0-4acd-b91b-d69ab4c86747): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.854415 5058 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gspwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69fdcfc5f5-nggc9_openstack-operators(8d419868-4916-4ad0-bb8b-b75dec32fd07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.855618 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbnk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c_openstack-operators(4b8f12d6-84f1-4095-817a-bbbfc74bb384): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.857525 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" podUID="4b8f12d6-84f1-4095-817a-bbbfc74bb384" Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.860077 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"] Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.865412 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vfkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5df598886f-hs54l_openstack-operators(cf76dba6-ea6c-4e07-9261-0c960bbae828): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: E1014 07:04:14.866245 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xg6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-67cfc6749b-gstgk_openstack-operators(287a5a36-ab23-48aa-bce3-c404703271d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 07:04:14 crc kubenswrapper[5058]: W1014 07:04:14.868348 5058 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5cb6101_1d0d_45af_9a55_09c33013c269.slice/crio-dc0215115593e54d090b23d4f351519e7a5f16bb44705ce1cd67126a6f71b71b WatchSource:0}: Error finding container dc0215115593e54d090b23d4f351519e7a5f16bb44705ce1cd67126a6f71b71b: Status 404 returned error can't find the container with id dc0215115593e54d090b23d4f351519e7a5f16bb44705ce1cd67126a6f71b71b Oct 14 07:04:14 crc kubenswrapper[5058]: I1014 07:04:14.879408 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz"] Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.034900 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" event={"ID":"853f3dc8-d9ec-4220-8190-dd7f5789e3ef","Type":"ContainerStarted","Data":"099ddf4beeee17a153981e25e9472d078783ed16b57859c5c18f662b46b6aca6"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.036059 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" event={"ID":"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc","Type":"ContainerStarted","Data":"08adcb4cf9ed31134288ce6ac6cb8249f1644ef427c8e7848907bee364fc4440"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.038780 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" event={"ID":"a005a6bf-bd52-4d64-9895-08889634f350","Type":"ContainerStarted","Data":"4647b679e34f79f09d8c9312b88bc8d02a40ad012bf9c09642d8147dc7ca5fc7"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.040571 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" event={"ID":"871cbb71-8b23-4e59-958f-436fb42cf771","Type":"ContainerStarted","Data":"2a7269597bb9536bc5f6500d3cd393c2fdf0400c3d13a6849408f9d4be325c2c"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.041331 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" event={"ID":"096b5116-c25a-4443-b8c0-af75f2b92262","Type":"ContainerStarted","Data":"027cd9d682e62308170d8067c3db6a2d285db62c95ec38ac70a55b6244f88f7e"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.041962 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" event={"ID":"156130c4-7aa0-4acd-b91b-d69ab4c86747","Type":"ContainerStarted","Data":"dc548cbdd1ac5980cdd7fe345ecb1bb478a1856561f997f0a067691f93dafb17"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.042778 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" event={"ID":"287a5a36-ab23-48aa-bce3-c404703271d0","Type":"ContainerStarted","Data":"30e5618cbec601403c553caca2517e898d6427c21a36edfaa1ef5d935e8e42e3"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.043480 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" event={"ID":"d3e72aae-75a1-4af7-8db5-2962735f6e86","Type":"ContainerStarted","Data":"fe8f632eaa7be953273b02ab5f1ffbd0bfc2e902d44b8c73741401b7629e81af"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.044391 5058 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" event={"ID":"b5cb6101-1d0d-45af-9a55-09c33013c269","Type":"ContainerStarted","Data":"dc0215115593e54d090b23d4f351519e7a5f16bb44705ce1cd67126a6f71b71b"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.063931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" event={"ID":"f25c3017-c824-47f3-add1-9bd90d8c6d69","Type":"ContainerStarted","Data":"3cd2af9a88d3a628a9f738fe437e4d9468a654358690c67f7b402fc92a6c496c"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.074408 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" event={"ID":"a5666028-58d7-4a84-bdcb-4df88b8bbcfe","Type":"ContainerStarted","Data":"db8e49d22496f4dc9706089d6fa6a234e59e32000cfaa86fb4a9018a9c582b35"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.077615 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" event={"ID":"8d419868-4916-4ad0-bb8b-b75dec32fd07","Type":"ContainerStarted","Data":"d9a9b1c52e47b4d00fbf46c1b3121b7132178b8f6c24cec818eeb94215470f49"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.080209 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" event={"ID":"cf76dba6-ea6c-4e07-9261-0c960bbae828","Type":"ContainerStarted","Data":"b30718d5cc99bd2d9481811f638834950777fcf59457fc376309b8d5676b68a6"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.082578 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" event={"ID":"4b8f12d6-84f1-4095-817a-bbbfc74bb384","Type":"ContainerStarted","Data":"a885b90e3bee03c03cee4a8ed5569dead8d3fc87efa5ce2faec09d947dcca149"} Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.085024 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" podUID="4b8f12d6-84f1-4095-817a-bbbfc74bb384" Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.087785 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" event={"ID":"67131cbe-92c7-4051-a27b-1ed2527b6b04","Type":"ContainerStarted","Data":"ea1c8a313c645d09c33e57d642a6e81c448aabd8e5e06802e42c76482c81686c"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.090440 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" event={"ID":"f40617a5-db6e-429b-8665-58fa79067b23","Type":"ContainerStarted","Data":"14f455399a19ad7c8522d7076f1c95c556195288c91696d5977509681b27e183"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.092851 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" event={"ID":"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660","Type":"ContainerStarted","Data":"c84fe80bdc5a82a01fddb85b5a78dba70304cdc1ae0af99011223a7f617dd198"} Oct 14 07:04:15 crc 
kubenswrapper[5058]: I1014 07:04:15.094538 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" event={"ID":"0285fda4-68ce-4cf0-bb77-50d2e3a66950","Type":"ContainerStarted","Data":"b2410ec83d42d08b1f1a7212d23c29c3e7c8d6bc72af541c958eb32044aa8518"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.100039 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" event={"ID":"7dfea04c-ae28-4df3-addc-f14abc359748","Type":"ContainerStarted","Data":"26fde5497077fbd0b2b6f85b9acdf840e6e13943e1c68933a71b1cbc32fd6bbf"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.101945 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" event={"ID":"b59421ab-44ca-48a0-945b-f6c42153397d","Type":"ContainerStarted","Data":"4c582ebeb51a2f418c95f52c4871647235020d6b8aa08348cb3338857558fd8f"} Oct 14 07:04:15 crc kubenswrapper[5058]: I1014 07:04:15.103775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" event={"ID":"eb904271-90a2-4e14-8034-230236c0bbda","Type":"ContainerStarted","Data":"4da05cecdc07fcd1fff9a512d43930f2e98868204d6090a894b3050599bd91dc"} Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.182529 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" podUID="156130c4-7aa0-4acd-b91b-d69ab4c86747" Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.213075 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" podUID="cf76dba6-ea6c-4e07-9261-0c960bbae828" Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.224182 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" podUID="8d419868-4916-4ad0-bb8b-b75dec32fd07" Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.225875 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" podUID="287a5a36-ab23-48aa-bce3-c404703271d0" Oct 14 07:04:15 crc kubenswrapper[5058]: E1014 07:04:15.248546 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" podUID="a5666028-58d7-4a84-bdcb-4df88b8bbcfe" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.121431 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" event={"ID":"a5666028-58d7-4a84-bdcb-4df88b8bbcfe","Type":"ContainerStarted","Data":"a5e675b0a00b1a6ecb3c84bf1ab9502dfd27a24faf12c2d64ad111d1df21edc2"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.123079 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" podUID="a5666028-58d7-4a84-bdcb-4df88b8bbcfe" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.138114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" event={"ID":"8d419868-4916-4ad0-bb8b-b75dec32fd07","Type":"ContainerStarted","Data":"7ddfdc35ec94e0a9fa3a57bbac18171ea4f39814e9447a5eb998508b9ec7494f"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.142180 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" podUID="8d419868-4916-4ad0-bb8b-b75dec32fd07" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.152849 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" event={"ID":"cf76dba6-ea6c-4e07-9261-0c960bbae828","Type":"ContainerStarted","Data":"309271a4ed9ded1ff30da60d58387e1761ed723731d78e411ded655453c1d4ec"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.154685 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" podUID="cf76dba6-ea6c-4e07-9261-0c960bbae828" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.158126 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" event={"ID":"287a5a36-ab23-48aa-bce3-c404703271d0","Type":"ContainerStarted","Data":"37f23cf7504e7697b6cd5606ad246f0b950a5a9a9e51e997f8ef82c3a0c91396"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.161439 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" podUID="287a5a36-ab23-48aa-bce3-c404703271d0" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.164981 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" event={"ID":"156130c4-7aa0-4acd-b91b-d69ab4c86747","Type":"ContainerStarted","Data":"304b2066cf4d5b0ae653bf2f741d0ad2b9915329322bda1a8504de96435a1a7a"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.166883 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" 
podUID="156130c4-7aa0-4acd-b91b-d69ab4c86747" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.170692 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" event={"ID":"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660","Type":"ContainerStarted","Data":"dd201de986b968c6fe1f5b104ba34f0b1783c3525f674a70d7181f974228bf04"} Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.170734 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" event={"ID":"b0d91fa4-f695-4d4d-8a83-22fb8d5a2660","Type":"ContainerStarted","Data":"1337710a8c88b271954c20b010bf01ce3ac15e79874935baa2ec295bf7dd1bec"} Oct 14 07:04:16 crc kubenswrapper[5058]: E1014 07:04:16.172680 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" podUID="4b8f12d6-84f1-4095-817a-bbbfc74bb384" Oct 14 07:04:16 crc kubenswrapper[5058]: I1014 07:04:16.261743 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" podStartSLOduration=3.2617278880000002 podStartE2EDuration="3.261727888s" podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:04:16.2417671 +0000 UTC m=+1004.152850906" watchObservedRunningTime="2025-10-14 07:04:16.261727888 +0000 UTC m=+1004.172811694" Oct 14 07:04:17 crc kubenswrapper[5058]: I1014 07:04:17.178883 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:17 crc kubenswrapper[5058]: E1014 07:04:17.180181 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" podUID="287a5a36-ab23-48aa-bce3-c404703271d0" Oct 14 07:04:17 crc kubenswrapper[5058]: E1014 07:04:17.181603 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" podUID="8d419868-4916-4ad0-bb8b-b75dec32fd07" Oct 14 07:04:17 crc kubenswrapper[5058]: E1014 07:04:17.181644 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" podUID="cf76dba6-ea6c-4e07-9261-0c960bbae828" Oct 14 07:04:17 crc kubenswrapper[5058]: E1014 07:04:17.181713 5058 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" podUID="a5666028-58d7-4a84-bdcb-4df88b8bbcfe" Oct 14 07:04:17 crc kubenswrapper[5058]: E1014 07:04:17.182692 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" podUID="156130c4-7aa0-4acd-b91b-d69ab4c86747" Oct 14 07:04:24 crc kubenswrapper[5058]: I1014 07:04:24.258089 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7fb8c88b76-t9bzz" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.292608 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" event={"ID":"b5cb6101-1d0d-45af-9a55-09c33013c269","Type":"ContainerStarted","Data":"fb7f3d105800a40361a8a459625988096985cb4d5fc86cae16f8dcc8d04edfd7"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.296530 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" event={"ID":"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc","Type":"ContainerStarted","Data":"394b68bbe01a07b8d107f3674171147d1c1d0a9ecc0d4092533c3e615aa5b21b"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.297949 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" event={"ID":"b59421ab-44ca-48a0-945b-f6c42153397d","Type":"ContainerStarted","Data":"0cf0215d1bfc98ea69d5395c6fc67daef394d4dfd0ffe74afc79edf47f77bc5d"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.304134 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" event={"ID":"f25c3017-c824-47f3-add1-9bd90d8c6d69","Type":"ContainerStarted","Data":"1b15f37d0e2a8985c4ca90391bc8571165d3725d00c366d5a96bfc2ffa37e067"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.304168 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" event={"ID":"f25c3017-c824-47f3-add1-9bd90d8c6d69","Type":"ContainerStarted","Data":"299d0c9478731bffa1d9b65df2849af4ac9599fd1d1fa51bd0040b1b0ed046dd"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.304262 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.309428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" event={"ID":"0285fda4-68ce-4cf0-bb77-50d2e3a66950","Type":"ContainerStarted","Data":"c648aab56cb2d229934ff546dbd64608f5d5aac0ba83a4c3433aaee6327b1712"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.310976 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
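The repeated ImagePullBackOff entries above are kubelet's image-pull retry throttle: each failed pull roughly doubles the wait before the next attempt, up to a cap, which is why the same "Back-off pulling image" error keeps recurring for each operator pod until its pull finally succeeds. A minimal Go sketch of that doubling-with-cap policy follows; the 10s base and 5m cap are the kubelet defaults as I understand them, not values taken from this log.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet defaults for image pull backoff (base 10s, cap 5m).
    	base, maxDelay := 10*time.Second, 5*time.Minute
    	delay := base
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("pull attempt %d fails -> next retry in %v\n", attempt, delay)
    		delay *= 2 // double the wait after every failure
    		if delay > maxDelay {
    			delay = maxDelay // never wait longer than the cap
    		}
    	}
    }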
pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" event={"ID":"67131cbe-92c7-4051-a27b-1ed2527b6b04","Type":"ContainerStarted","Data":"9536d88dda5c17ef42a6d1e24af46a05402bc2ceddc802f7e77a32c2f706ebd3"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.310999 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" event={"ID":"67131cbe-92c7-4051-a27b-1ed2527b6b04","Type":"ContainerStarted","Data":"18a7a1af88a6d6cc79b9326a3fbdbe852787cde85823f839c9eda04d46d82da7"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.311589 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.315000 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" event={"ID":"f40617a5-db6e-429b-8665-58fa79067b23","Type":"ContainerStarted","Data":"440ead7f429816a0b1dba3896c07436f0b77229b6a2bc221446ffcb3b1ef5a1b"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.316113 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" event={"ID":"871cbb71-8b23-4e59-958f-436fb42cf771","Type":"ContainerStarted","Data":"506487abf42f90a2e8812a9ebb3c9041533cf5f96ac8dcab41eb2210c6ae84b4"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.325011 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5" podStartSLOduration=3.8162108569999997 podStartE2EDuration="15.32499692s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.444261747 +0000 UTC m=+1002.355345553" lastFinishedPulling="2025-10-14 07:04:25.9530478 +0000 UTC m=+1013.864131616" observedRunningTime="2025-10-14 07:04:27.323831877 +0000 UTC m=+1015.234915693" watchObservedRunningTime="2025-10-14 07:04:27.32499692 +0000 UTC m=+1015.236080726" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.326758 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" event={"ID":"e4f8d193-5863-4d60-9c5a-8596e64dcdc5","Type":"ContainerStarted","Data":"65991ab38daa387d8fdfa8cade02fc90acf7ceaeb05d73bc8d806b8b11adb85c"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.326822 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.326832 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" event={"ID":"e4f8d193-5863-4d60-9c5a-8596e64dcdc5","Type":"ContainerStarted","Data":"58867aee8eaee677442c6d16b949fb3862854c5bc9258c973622e6acfa05ac48"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.333966 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" event={"ID":"d3e72aae-75a1-4af7-8db5-2962735f6e86","Type":"ContainerStarted","Data":"bf2c43a4078c69c7ee37f1000b6e248b4e109f6eb372917a73de1667d22fcd7b"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.336201 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" event={"ID":"853f3dc8-d9ec-4220-8190-dd7f5789e3ef","Type":"ContainerStarted","Data":"f7a2f69c8295eed20a9447c0eaab9d32f553a8aec8f56e66bace3dadb866ffc6"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.341774 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" event={"ID":"a005a6bf-bd52-4d64-9895-08889634f350","Type":"ContainerStarted","Data":"56ea426d776a0b57cdeb56f00fde8d1e5f821e8f6b05ae199fdff524f1b91736"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.341859 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" event={"ID":"a005a6bf-bd52-4d64-9895-08889634f350","Type":"ContainerStarted","Data":"791c39db1c4bb7e35496289003bb7e39ac2c8a6de0a5cacf3243eee426a18d30"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.342429 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.344309 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh" podStartSLOduration=3.9089507770000003 podStartE2EDuration="15.34429208s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.47738074 +0000 UTC m=+1002.388464546" lastFinishedPulling="2025-10-14 07:04:25.912722003 +0000 UTC m=+1013.823805849" observedRunningTime="2025-10-14 07:04:27.343032374 +0000 UTC m=+1015.254116180" watchObservedRunningTime="2025-10-14 07:04:27.34429208 +0000 UTC m=+1015.255375886" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.346347 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" event={"ID":"7dfea04c-ae28-4df3-addc-f14abc359748","Type":"ContainerStarted","Data":"6f78510c8a4927d9c1dba8fdfb8b64f1650add1b3438d520e2fd634e5ec667c1"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.348672 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" event={"ID":"096b5116-c25a-4443-b8c0-af75f2b92262","Type":"ContainerStarted","Data":"a92a46cb7c1c9132b32a10e9228428c76bdff24d64fe1e196ad5d3138508038e"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.356609 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" event={"ID":"eb904271-90a2-4e14-8034-230236c0bbda","Type":"ContainerStarted","Data":"09e9c74f86366437b8de1bc92143b20ca0b162d9244cdf9f0eb0c90545e250c2"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.356639 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" event={"ID":"eb904271-90a2-4e14-8034-230236c0bbda","Type":"ContainerStarted","Data":"83a66050b0acb50f99fe3fa18f010af69d082f542493c21bbc53ad58e532855c"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.357401 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.358683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" event={"ID":"182bf838-8aa5-488a-80cb-b0b5d45b8503","Type":"ContainerStarted","Data":"54078b021a170fad460c4af2f0ba4d5ac1ac608ebb356a13554b90f3c18b5d77"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.358704 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" event={"ID":"182bf838-8aa5-488a-80cb-b0b5d45b8503","Type":"ContainerStarted","Data":"b9876070b1bdd9871955eb6eae2ff421418e2691416bb945c7797eb2670b3cba"} Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.359063 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.367385 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7" podStartSLOduration=3.2849572670000002 podStartE2EDuration="15.367374046s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:13.823545721 +0000 UTC m=+1001.734629527" lastFinishedPulling="2025-10-14 07:04:25.90596246 +0000 UTC m=+1013.817046306" observedRunningTime="2025-10-14 07:04:27.363158227 +0000 UTC m=+1015.274242033" watchObservedRunningTime="2025-10-14 07:04:27.367374046 +0000 UTC m=+1015.278457852" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.388091 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7" podStartSLOduration=4.252366432 podStartE2EDuration="15.388074786s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.805322314 +0000 UTC m=+1002.716406120" lastFinishedPulling="2025-10-14 07:04:25.941030668 +0000 UTC m=+1013.852114474" observedRunningTime="2025-10-14 07:04:27.38505381 +0000 UTC m=+1015.296137616" watchObservedRunningTime="2025-10-14 07:04:27.388074786 +0000 UTC m=+1015.299158592" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.420374 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8" podStartSLOduration=3.168781101 podStartE2EDuration="15.420356985s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:13.664818563 +0000 UTC m=+1001.575902369" lastFinishedPulling="2025-10-14 07:04:25.916394427 +0000 UTC m=+1013.827478253" observedRunningTime="2025-10-14 07:04:27.412943974 +0000 UTC m=+1015.324027770" watchObservedRunningTime="2025-10-14 07:04:27.420356985 +0000 UTC m=+1015.331440791" Oct 14 07:04:27 crc kubenswrapper[5058]: I1014 07:04:27.438297 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4" podStartSLOduration=3.571603865 podStartE2EDuration="15.438278535s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.074313967 +0000 UTC m=+1001.985397773" lastFinishedPulling="2025-10-14 07:04:25.940988637 +0000 UTC m=+1013.852072443" observedRunningTime="2025-10-14 07:04:27.437083781 +0000 UTC m=+1015.348167587" watchObservedRunningTime="2025-10-14 07:04:27.438278535 +0000 UTC m=+1015.349362341" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.377637 5058 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" event={"ID":"f40617a5-db6e-429b-8665-58fa79067b23","Type":"ContainerStarted","Data":"ad86ef3e8494ce802a75eea835965ffd6623940f799ede35f13d5f4642aeff3d"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.377850 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.380066 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" event={"ID":"096b5116-c25a-4443-b8c0-af75f2b92262","Type":"ContainerStarted","Data":"ab4bf9a56a87c7f49ee5f160d156f3c0e5612872d57706a789a664e79ab277e6"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.380622 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.382463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" event={"ID":"7dfea04c-ae28-4df3-addc-f14abc359748","Type":"ContainerStarted","Data":"e4629a5fa1da1754e0f5c1ec02886689dc44e3a8b3036585df06db56bd7f9ad0"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.382675 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.387838 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" event={"ID":"b59421ab-44ca-48a0-945b-f6c42153397d","Type":"ContainerStarted","Data":"eef63f41c0498f04e5354f271118109c09fb1dc75a1026119ce00e4af1dba33e"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.387958 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.398212 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" event={"ID":"871cbb71-8b23-4e59-958f-436fb42cf771","Type":"ContainerStarted","Data":"f4323e83023d6af5250141d165b114241aa63e968aeb5f277e83370abc93545d"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.399230 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.404029 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" event={"ID":"d3e72aae-75a1-4af7-8db5-2962735f6e86","Type":"ContainerStarted","Data":"fffeda84b3094260b8c933226da477815a6ba163b9f3883f8078bed11c65d819"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.404127 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.408806 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7" podStartSLOduration=4.253045123 podStartE2EDuration="15.408772367s" podCreationTimestamp="2025-10-14 07:04:13 
+0000 UTC" firstStartedPulling="2025-10-14 07:04:14.823392939 +0000 UTC m=+1002.734476745" lastFinishedPulling="2025-10-14 07:04:25.979120183 +0000 UTC m=+1013.890203989" observedRunningTime="2025-10-14 07:04:28.399502693 +0000 UTC m=+1016.310586519" watchObservedRunningTime="2025-10-14 07:04:28.408772367 +0000 UTC m=+1016.319856173" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.411582 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" event={"ID":"b5cb6101-1d0d-45af-9a55-09c33013c269","Type":"ContainerStarted","Data":"21f9ea38aa8b2e68957be49aaf49defc6f397db70a1168de0512b92a4891b42d"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.412117 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.422863 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" event={"ID":"88b4ba9d-c49f-47e2-aff8-c4e766d25bcc","Type":"ContainerStarted","Data":"74a2ab3a537617e6fa6d785082057aba66bf4a0e744196654f87e263c4d9e973"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.423665 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.424420 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms" podStartSLOduration=5.288665307 podStartE2EDuration="16.424402142s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.81429855 +0000 UTC m=+1002.725382346" lastFinishedPulling="2025-10-14 07:04:25.950035365 +0000 UTC m=+1013.861119181" observedRunningTime="2025-10-14 07:04:28.420624334 +0000 UTC m=+1016.331708140" watchObservedRunningTime="2025-10-14 07:04:28.424402142 +0000 UTC m=+1016.335485958" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.432318 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" event={"ID":"853f3dc8-d9ec-4220-8190-dd7f5789e3ef","Type":"ContainerStarted","Data":"130f8a518dc7e30be5ae596fe9f963da7a789de596ebfa12532a6509ea2c6c21"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.433123 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.435983 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" event={"ID":"0285fda4-68ce-4cf0-bb77-50d2e3a66950","Type":"ContainerStarted","Data":"a956e1d282f30238f2ba7955050840ee519d3aaadba7dba260a314c5ff947009"} Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.436008 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.462620 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg" podStartSLOduration=4.30038378 podStartE2EDuration="15.462605609s" 
podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.803105241 +0000 UTC m=+1002.714189037" lastFinishedPulling="2025-10-14 07:04:25.96532706 +0000 UTC m=+1013.876410866" observedRunningTime="2025-10-14 07:04:28.461303012 +0000 UTC m=+1016.372386848" watchObservedRunningTime="2025-10-14 07:04:28.462605609 +0000 UTC m=+1016.373689415" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.465436 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w" podStartSLOduration=4.946035135 podStartE2EDuration="16.46542552s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.431267287 +0000 UTC m=+1002.342351093" lastFinishedPulling="2025-10-14 07:04:25.950657662 +0000 UTC m=+1013.861741478" observedRunningTime="2025-10-14 07:04:28.44505648 +0000 UTC m=+1016.356140296" watchObservedRunningTime="2025-10-14 07:04:28.46542552 +0000 UTC m=+1016.376509336" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.476470 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt" podStartSLOduration=4.963329206 podStartE2EDuration="16.476452123s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.439515442 +0000 UTC m=+1002.350599238" lastFinishedPulling="2025-10-14 07:04:25.952638339 +0000 UTC m=+1013.863722155" observedRunningTime="2025-10-14 07:04:28.474845178 +0000 UTC m=+1016.385929004" watchObservedRunningTime="2025-10-14 07:04:28.476452123 +0000 UTC m=+1016.387535939" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.495377 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr" podStartSLOduration=4.971021466 podStartE2EDuration="16.495361792s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.405826973 +0000 UTC m=+1002.316910779" lastFinishedPulling="2025-10-14 07:04:25.930167299 +0000 UTC m=+1013.841251105" observedRunningTime="2025-10-14 07:04:28.491608125 +0000 UTC m=+1016.402691961" watchObservedRunningTime="2025-10-14 07:04:28.495361792 +0000 UTC m=+1016.406445598" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.509958 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9" podStartSLOduration=4.628002832 podStartE2EDuration="16.509940387s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.06809558 +0000 UTC m=+1001.979179386" lastFinishedPulling="2025-10-14 07:04:25.950033145 +0000 UTC m=+1013.861116941" observedRunningTime="2025-10-14 07:04:28.50690949 +0000 UTC m=+1016.417993306" watchObservedRunningTime="2025-10-14 07:04:28.509940387 +0000 UTC m=+1016.421024193" Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.542702 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" podStartSLOduration=5.487103455 podStartE2EDuration="16.542682219s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.885112315 +0000 UTC m=+1002.796196121" lastFinishedPulling="2025-10-14 07:04:25.940691069 +0000 UTC m=+1013.851774885" observedRunningTime="2025-10-14 
Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.542702 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b" podStartSLOduration=5.487103455 podStartE2EDuration="16.542682219s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.885112315 +0000 UTC m=+1002.796196121" lastFinishedPulling="2025-10-14 07:04:25.940691069 +0000 UTC m=+1013.851774885" observedRunningTime="2025-10-14 07:04:28.535669719 +0000 UTC m=+1016.446753515" watchObservedRunningTime="2025-10-14 07:04:28.542682219 +0000 UTC m=+1016.453766025"
Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.561628 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v" podStartSLOduration=4.394085388 podStartE2EDuration="15.561610277s" podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.78407947 +0000 UTC m=+1002.695163276" lastFinishedPulling="2025-10-14 07:04:25.951604349 +0000 UTC m=+1013.862688165" observedRunningTime="2025-10-14 07:04:28.55573973 +0000 UTC m=+1016.466823546" watchObservedRunningTime="2025-10-14 07:04:28.561610277 +0000 UTC m=+1016.472694073"
Oct 14 07:04:28 crc kubenswrapper[5058]: I1014 07:04:28.570230 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65" podStartSLOduration=5.4684714549999995 podStartE2EDuration="16.570206482s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.834198176 +0000 UTC m=+1002.745281982" lastFinishedPulling="2025-10-14 07:04:25.935933193 +0000 UTC m=+1013.847017009" observedRunningTime="2025-10-14 07:04:28.567824874 +0000 UTC m=+1016.478908690" watchObservedRunningTime="2025-10-14 07:04:28.570206482 +0000 UTC m=+1016.481290288"
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.456059 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" event={"ID":"287a5a36-ab23-48aa-bce3-c404703271d0","Type":"ContainerStarted","Data":"d280a5103fcb845b873504e73e6401072f083616b8f2d0865aed19f5b4c72ebf"}
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.457443 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.460460 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" event={"ID":"156130c4-7aa0-4acd-b91b-d69ab4c86747","Type":"ContainerStarted","Data":"b0890781aeb6bb5681299ee59fd996e736a6017fb165d34b14e9ac471fa64c22"}
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.461378 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.485691 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk" podStartSLOduration=2.842006301 podStartE2EDuration="18.485670285s" podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.866103744 +0000 UTC m=+1002.777187550" lastFinishedPulling="2025-10-14 07:04:30.509767728 +0000 UTC m=+1018.420851534" observedRunningTime="2025-10-14 07:04:31.474028774 +0000 UTC m=+1019.385112640" watchObservedRunningTime="2025-10-14 07:04:31.485670285 +0000 UTC m=+1019.396754091"
Oct 14 07:04:31 crc kubenswrapper[5058]: I1014 07:04:31.492769 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq" podStartSLOduration=3.839709186 podStartE2EDuration="19.492753997s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.854220526 +0000 UTC m=+1002.765304332" lastFinishedPulling="2025-10-14 07:04:30.507265337 +0000 UTC m=+1018.418349143" observedRunningTime="2025-10-14 07:04:31.49041475 +0000 UTC m=+1019.401498576" watchObservedRunningTime="2025-10-14 07:04:31.492753997 +0000 UTC m=+1019.403837803"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.474640 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" event={"ID":"8d419868-4916-4ad0-bb8b-b75dec32fd07","Type":"ContainerStarted","Data":"083fd475d23c38ea2568fd3374b4d3948c7004474a87167a371a737d54752470"}
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.475398 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.477973 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" event={"ID":"cf76dba6-ea6c-4e07-9261-0c960bbae828","Type":"ContainerStarted","Data":"37d3647a4ed3f10bea19f3b17ef185d408378f8f78d1b03ff612cfd612d680f4"}
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.478282 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.481921 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" event={"ID":"4b8f12d6-84f1-4095-817a-bbbfc74bb384","Type":"ContainerStarted","Data":"e9a5b7a11f2dbed6dc3a703d9d2f4d66d345c1ed4284685de431657e1f7e9efd"}
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.508105 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9" podStartSLOduration=3.827174868 podStartE2EDuration="20.508081195s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.854110863 +0000 UTC m=+1002.765194659" lastFinishedPulling="2025-10-14 07:04:31.53501718 +0000 UTC m=+1019.446100986" observedRunningTime="2025-10-14 07:04:32.50334113 +0000 UTC m=+1020.414425006" watchObservedRunningTime="2025-10-14 07:04:32.508081195 +0000 UTC m=+1020.419165041"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.524394 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l" podStartSLOduration=4.859939454 podStartE2EDuration="20.524359678s" podCreationTimestamp="2025-10-14 07:04:12 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.865199769 +0000 UTC m=+1002.776283575" lastFinishedPulling="2025-10-14 07:04:30.529619983 +0000 UTC m=+1018.440703799" observedRunningTime="2025-10-14 07:04:32.52267212 +0000 UTC m=+1020.433755966" watchObservedRunningTime="2025-10-14 07:04:32.524359678 +0000 UTC m=+1020.435443524"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.547654 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c" podStartSLOduration=2.868825184 podStartE2EDuration="19.547629571s" podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.855519153 +0000 UTC m=+1002.766602959" lastFinishedPulling="2025-10-14 07:04:31.53432355 +0000 UTC m=+1019.445407346" observedRunningTime="2025-10-14 07:04:32.541769984 +0000 UTC m=+1020.452853830" watchObservedRunningTime="2025-10-14 07:04:32.547629571 +0000 UTC m=+1020.458713427"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.943300 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-ktwj8"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.964882 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-tdlj7"
Oct 14 07:04:32 crc kubenswrapper[5058]: I1014 07:04:32.986552 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-rzjp4"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.054637 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-b6kh5"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.076830 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-8sbg9"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.109395 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-c5sjr"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.150206 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-2gj7w"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.195381 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-sv4jh"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.196782 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-lsznt"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.499686 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-p27j7"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.594012 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-nds65"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.613015 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-mvrtg"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.711814 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-8tmms"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.737958 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5458f77c4-2fm9v"
Oct 14 07:04:33 crc kubenswrapper[5058]: I1014 07:04:33.802767 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-2gfn7"
Oct 14 07:04:34 crc kubenswrapper[5058]: I1014 07:04:34.184027 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55b7d44848jxd9b"
Oct 14 07:04:34 crc kubenswrapper[5058]: I1014 07:04:34.499160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" event={"ID":"a5666028-58d7-4a84-bdcb-4df88b8bbcfe","Type":"ContainerStarted","Data":"94c1f2e56c758fa5896ef77e78887e1c370f0a7b4348cdd441f9dd3fcaf72a3b"}
Oct 14 07:04:34 crc kubenswrapper[5058]: I1014 07:04:34.500685 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"
Oct 14 07:04:34 crc kubenswrapper[5058]: I1014 07:04:34.521672 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29" podStartSLOduration=2.117044777 podStartE2EDuration="21.521652458s" podCreationTimestamp="2025-10-14 07:04:13 +0000 UTC" firstStartedPulling="2025-10-14 07:04:14.840337171 +0000 UTC m=+1002.751420967" lastFinishedPulling="2025-10-14 07:04:34.244944832 +0000 UTC m=+1022.156028648" observedRunningTime="2025-10-14 07:04:34.516317066 +0000 UTC m=+1022.427400902" watchObservedRunningTime="2025-10-14 07:04:34.521652458 +0000 UTC m=+1022.432736264"
Oct 14 07:04:43 crc kubenswrapper[5058]: I1014 07:04:43.524959 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-j8qpq"
Oct 14 07:04:43 crc kubenswrapper[5058]: I1014 07:04:43.562878 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5df598886f-hs54l"
Oct 14 07:04:43 crc kubenswrapper[5058]: I1014 07:04:43.578475 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-nggc9"
Oct 14 07:04:43 crc kubenswrapper[5058]: I1014 07:04:43.671579 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-ndx29"
Oct 14 07:04:43 crc kubenswrapper[5058]: I1014 07:04:43.679418 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-gstgk"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.187458 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"]
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.189091 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.192601 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.192840 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.192925 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nv58c"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.193123 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.207569 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"]
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.243648 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"]
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.244751 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.246744 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.263314 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"]
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.302984 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.303027 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.303080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.303287 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4d5t\" (UniqueName: \"kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.303462 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wqv\" (UniqueName: \"kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.404458 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.404505 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.404573 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.404625 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4d5t\" (UniqueName: \"kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.404675 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wqv\" (UniqueName: \"kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.405405 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.405405 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.405473 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.422005 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4d5t\" (UniqueName: \"kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t\") pod \"dnsmasq-dns-6948694bd9-zh4dh\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.423525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wqv\" (UniqueName: \"kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv\") pod \"dnsmasq-dns-5d487d97d7-z6zlq\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.506868 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.564544 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh"
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.936241 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"]
Oct 14 07:05:01 crc kubenswrapper[5058]: I1014 07:05:01.944538 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 07:05:02 crc kubenswrapper[5058]: I1014 07:05:02.017620 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"]
Oct 14 07:05:02 crc kubenswrapper[5058]: W1014 07:05:02.020988 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a77a8c8_3dc6_4cbe_b4f9_506d29713f26.slice/crio-06c51b88b4ed52dccb231f821ca56dcc0de293a06a903f80e0b7f631cd9d0296 WatchSource:0}: Error finding container 06c51b88b4ed52dccb231f821ca56dcc0de293a06a903f80e0b7f631cd9d0296: Status 404 returned error can't find the container with id 06c51b88b4ed52dccb231f821ca56dcc0de293a06a903f80e0b7f631cd9d0296
Oct 14 07:05:02 crc kubenswrapper[5058]: I1014 07:05:02.809524 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh" event={"ID":"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26","Type":"ContainerStarted","Data":"06c51b88b4ed52dccb231f821ca56dcc0de293a06a903f80e0b7f631cd9d0296"}
Oct 14 07:05:02 crc kubenswrapper[5058]: I1014 07:05:02.809589 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq" event={"ID":"f7049559-846e-4f8a-b56f-d07d57108a88","Type":"ContainerStarted","Data":"97c74e3c0a2d5b261bdaf16f01274b588f357089aa76676bc05913b0f5a5de6e"}
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.231480 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"]
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.256077 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"]
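All of these kubenswrapper entries share one shape: a journald prefix, then a klog header (severity letter I/W/E, MMDD date, wall-clock time, PID, source file:line) followed by the message. A small Go parser for that shape, with the field layout inferred from the lines in this log rather than from any klog specification:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // journald prefix, then klog header: severity, MMDD, time, PID, file:line.
    var logLine = regexp.MustCompile(
    	`^(\w+ \d+ [\d:]+) crc kubenswrapper\[(\d+)\]: ([IWE])(\d{4}) ([\d:.]+) +\d+ ([\w.]+:\d+)\] (.*)$`)

    func main() {
    	line := `Oct 14 07:05:02 crc kubenswrapper[5058]: W1014 07:05:02.020988 5058 manager.go:1169] Failed to process watch event`
    	if m := logLine.FindStringSubmatch(line); m != nil {
    		// m[1]=timestamp, m[3]=severity, m[6]=source location, m[7]=message
    		fmt.Printf("ts=%s sev=%s src=%s msg=%q\n", m[1], m[3], m[6], m[7])
    	}
    }

The W-level "Failed to process watch event ... Status 404" entry used in the example appears to be a benign race: cAdvisor sees the new crio cgroup before the container is registered, and the subsequent PLEG ContainerStarted event for the same container ID shows it came up normally.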
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.257701 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.261918 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"]
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.356709 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.357073 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkg4q\" (UniqueName: \"kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.357102 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.459150 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.459228 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkg4q\" (UniqueName: \"kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.459247 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.460281 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.460320 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.480196 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkg4q\" (UniqueName: \"kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q\") pod \"dnsmasq-dns-86f694bf-bzcc9\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.580650 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f694bf-bzcc9"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.900514 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"]
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.934777 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"]
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.935967 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.953446 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"]
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.982023 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.982109 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:03 crc kubenswrapper[5058]: I1014 07:05:03.982150 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rkg\" (UniqueName: \"kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.083645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.083714 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rkg\" (UniqueName: \"kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.083811 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r"
\"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.085423 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.100775 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"] Oct 14 07:05:04 crc kubenswrapper[5058]: W1014 07:05:04.103737 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bb4a9b8_e6bb_4de3_b025_8981a145c5ce.slice/crio-e39b0f545618b138dab02ea0698b480c3bca20ba8199587c7889cac2b8baacec WatchSource:0}: Error finding container e39b0f545618b138dab02ea0698b480c3bca20ba8199587c7889cac2b8baacec: Status 404 returned error can't find the container with id e39b0f545618b138dab02ea0698b480c3bca20ba8199587c7889cac2b8baacec Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.122406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rkg\" (UniqueName: \"kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg\") pod \"dnsmasq-dns-7869c47d6c-zgc6r\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.258037 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.404222 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.405619 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.407421 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.407571 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.407675 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.407840 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.408041 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pj9d9" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.408232 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.408706 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.421818 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.499658 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.499741 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.499935 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500010 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bb8\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500040 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500138 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500158 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500210 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500254 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.500276 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.601720 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.601763 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.601833 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602577 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bb8\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " 
pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602617 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602648 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602689 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602711 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602745 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602776 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.602823 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.603234 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.604341 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.605731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.607006 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.607217 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.609244 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.609611 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.610291 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.610995 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.615307 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.623612 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bb8\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.641275 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.724656 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"] Oct 14 07:05:04 crc 
kubenswrapper[5058]: I1014 07:05:04.737554 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 07:05:04 crc kubenswrapper[5058]: W1014 07:05:04.738953 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf47d13c9_e297_4139_8f8d_60bac20a81be.slice/crio-3c05b12b948558e0899b30ac56edd748a4b9089641862af8803461daf70cc81f WatchSource:0}: Error finding container 3c05b12b948558e0899b30ac56edd748a4b9089641862af8803461daf70cc81f: Status 404 returned error can't find the container with id 3c05b12b948558e0899b30ac56edd748a4b9089641862af8803461daf70cc81f Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.833199 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" event={"ID":"f47d13c9-e297-4139-8f8d-60bac20a81be","Type":"ContainerStarted","Data":"3c05b12b948558e0899b30ac56edd748a4b9089641862af8803461daf70cc81f"} Oct 14 07:05:04 crc kubenswrapper[5058]: I1014 07:05:04.835785 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" event={"ID":"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce","Type":"ContainerStarted","Data":"e39b0f545618b138dab02ea0698b480c3bca20ba8199587c7889cac2b8baacec"} Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.036904 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.039519 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.041287 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.042057 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.042410 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.043152 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.043305 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.043343 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w8n9n" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.044586 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.063147 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110347 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110676 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110709 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110765 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110846 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.110921 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlxp\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.111029 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.111114 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.111175 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.111208 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.111262 5058 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212673 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212738 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlxp\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212774 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212829 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212859 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212878 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212908 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.212975 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.213011 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.213045 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.213076 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.217862 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.218070 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.218096 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.218494 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.218918 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.219049 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.227362 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.231437 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.235261 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.247572 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.247986 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.253893 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlxp\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.268286 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.362391 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.845554 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerStarted","Data":"1bbe031be043bd216e1a6072617f3e375af4c6d19a0d6b608698078751e8a162"} Oct 14 07:05:05 crc kubenswrapper[5058]: I1014 07:05:05.882615 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:05:05 crc kubenswrapper[5058]: W1014 07:05:05.896066 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb753342c_4a7e_4bf6_809a_3c5bc083ba6a.slice/crio-503a01fb18f33d2d3c6232ee4e9c88dbe5e7f4095c948dc5fdc0d02ac9e15af8 WatchSource:0}: Error finding container 503a01fb18f33d2d3c6232ee4e9c88dbe5e7f4095c948dc5fdc0d02ac9e15af8: Status 404 returned error can't find the container with id 503a01fb18f33d2d3c6232ee4e9c88dbe5e7f4095c948dc5fdc0d02ac9e15af8 Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.396308 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.398972 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.402568 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.403452 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.403464 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.403601 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.404742 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mzgfc" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.407733 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.410091 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.428931 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.428966 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.428988 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.429024 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.429045 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.429062 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2rcv\" (UniqueName: \"kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc 
kubenswrapper[5058]: I1014 07:05:06.429235 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.429386 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.429513 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536381 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536446 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536613 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536668 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536745 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.536936 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2rcv\" (UniqueName: 
\"kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.537027 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.537323 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.538189 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.538902 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.539077 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.539460 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.540652 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.540963 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.541987 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: 
I1014 07:05:06.554985 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2rcv\" (UniqueName: \"kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.555254 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.561961 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.737313 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 07:05:06 crc kubenswrapper[5058]: I1014 07:05:06.854729 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerStarted","Data":"503a01fb18f33d2d3c6232ee4e9c88dbe5e7f4095c948dc5fdc0d02ac9e15af8"} Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.725844 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.731662 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.738579 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ck7h9" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.739589 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.739936 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.740306 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754475 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754515 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754546 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754589 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxtv5\" (UniqueName: \"kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754616 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754640 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " 
pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754689 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.754707 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.755522 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.861327 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.861427 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.861450 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.861473 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.862371 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.862879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.862937 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.862982 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxtv5\" (UniqueName: \"kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.863026 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.863863 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.864862 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.869171 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.872508 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.873861 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.893106 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.893414 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 
07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.894174 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.894636 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxtv5\" (UniqueName: \"kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:07 crc kubenswrapper[5058]: I1014 07:05:07.910894 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.055227 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.056101 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.057255 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.066266 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.068611 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rzb7p" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.069115 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.069388 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.172625 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.172673 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.173296 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.173388 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2c7\" (UniqueName: 
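
The records above trace the kubelet volume reconciler's lifecycle for openstack-cell1-galera-0: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" per volume (plus a MountVolume.MountDevice step for the local PV). A minimal parsing sketch follows, assuming one journald record per line as reformatted above; the script and its names are illustrative, not an official tool:

    import re
    from datetime import datetime

    # Pair each "MountVolume started" record with its "MountVolume.SetUp
    # succeeded" record and report the per-volume mount latency.
    STARTED = re.compile(r'I1014 (\d\d:\d\d:\d\d\.\d+) .*"operationExecutor\.MountVolume '
                         r'started for volume \\"([^\\]+)\\".*?pod="([^"]+)"')
    SUCCEEDED = re.compile(r'I1014 (\d\d:\d\d:\d\d\.\d+) .*"MountVolume\.SetUp succeeded '
                           r'for volume \\"([^\\]+)\\".*?pod="([^"]+)"')

    def mount_latencies(lines):
        started = {}
        for line in lines:
            if m := STARTED.search(line):
                ts, vol, pod = m.groups()
                started[(pod, vol)] = datetime.strptime(ts, "%H:%M:%S.%f")
            elif m := SUCCEEDED.search(line):
                ts, vol, pod = m.groups()
                if (key := (pod, vol)) in started:
                    done = datetime.strptime(ts, "%H:%M:%S.%f")
                    yield pod, vol, (done - started[key]).total_seconds()

On the records above this would report, e.g., kolla-config mounting in about 0.002 s (07:05:07.861427 to 07:05:07.863863) and local-storage10-crc in about 0.048 s (07:05:07.862937 to 07:05:07.910894).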
\"kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.173511 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.274996 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.275332 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2c7\" (UniqueName: \"kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.276419 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.276464 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.276493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.276931 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.277525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.281856 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.284446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.292175 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2c7\" (UniqueName: \"kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7\") pod \"memcached-0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " pod="openstack/memcached-0" Oct 14 07:05:08 crc kubenswrapper[5058]: I1014 07:05:08.406135 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.730126 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.732124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.734405 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-86ch9" Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.745054 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.801768 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthkl\" (UniqueName: \"kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl\") pod \"kube-state-metrics-0\" (UID: \"527d8486-b4e1-4ee0-8618-c1bcfb73139b\") " pod="openstack/kube-state-metrics-0" Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.903550 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthkl\" (UniqueName: \"kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl\") pod \"kube-state-metrics-0\" (UID: \"527d8486-b4e1-4ee0-8618-c1bcfb73139b\") " pod="openstack/kube-state-metrics-0" Oct 14 07:05:09 crc kubenswrapper[5058]: I1014 07:05:09.928574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthkl\" (UniqueName: \"kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl\") pod \"kube-state-metrics-0\" (UID: \"527d8486-b4e1-4ee0-8618-c1bcfb73139b\") " pod="openstack/kube-state-metrics-0" Oct 14 07:05:10 crc kubenswrapper[5058]: I1014 07:05:10.049758 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.593514 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.596425 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.604224 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.604341 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gcbk4"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.604388 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.605085 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.605271 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.608455 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.665745 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.665854 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666035 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666114 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666176 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666201 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrsz\" (UniqueName: \"kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666242 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.666262 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767085 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767167 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767193 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrsz\" (UniqueName: \"kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767276 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.767761 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768329 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768416 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768496 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768702 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.768870 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.773689 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.773887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.777021 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.796608 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrsz\" (UniqueName: \"kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.811775 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " pod="openstack/ovsdbserver-nb-0"
Oct 14 07:05:13 crc kubenswrapper[5058]: I1014 07:05:13.952983 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
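
Local-volume PVs are the only volumes here that log a MountVolume.MountDevice step, which records the backing host path. A minimal sketch, assuming one record per line (illustrative, not an official tool), to extract the PV-to-host-path mapping:

    import re

    # "MountVolume.MountDevice succeeded ... device mount path \"/mnt/openstack/pvNN\""
    DEVICE = re.compile(r'MountVolume\.MountDevice succeeded for volume \\"(local-[^\\]+)\\"'
                        r'.*device mount path \\"([^\\]+)\\"')

    def local_pv_mounts(lines):
        return {m.group(1): m.group(2) for line in lines if (m := DEVICE.search(line))}
    # From this section: {'local-storage10-crc': '/mnt/openstack/pv10',
    #                     'local-storage05-crc': '/mnt/openstack/pv05',
    #                     'local-storage12-crc': '/mnt/openstack/pv12'}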
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.425811 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4gftc"]
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.426879 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.429539 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.434455 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.434719 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bplx4"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.441721 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wskz9"]
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.447125 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.458977 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc"]
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478636 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478675 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478715 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478739 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478757 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478822 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6crf\" (UniqueName: \"kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478838 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478863 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478885 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478905 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478923 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478942 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chvv\" (UniqueName: \"kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.478964 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.525761 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wskz9"]
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579843 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6crf\" (UniqueName: \"kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579888 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579916 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579939 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579955 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579971 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.579987 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chvv\" (UniqueName: \"kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580007 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580040 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580055 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580084 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580103 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580119 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.580578 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.582315 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.582698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.582789 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.582837 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.582913 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.583062 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.583141 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.583880 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.613662 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.631215 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6crf\" (UniqueName: \"kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf\") pod \"ovn-controller-ovs-wskz9\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.631225 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.631582 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chvv\" (UniqueName: \"kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv\") pod \"ovn-controller-4gftc\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.751170 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc"
Oct 14 07:05:14 crc kubenswrapper[5058]: I1014 07:05:14.781157 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.300665 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.302162 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.306378 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.306473 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.306700 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.306698 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fqgml"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.320259 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.404855 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.404947 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.404986 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8bl\" (UniqueName: \"kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.405004 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.405020 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.405046 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.405060 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.405104 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506067 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506151 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506198 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506215 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg8bl\" (UniqueName: \"kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506232 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506249 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506274 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.506288 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.507616 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.508088 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.508089 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.508296 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.513310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.513317 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.515612 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.538999 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg8bl\" (UniqueName: \"kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.552110 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:16 crc kubenswrapper[5058]: I1014 07:05:16.621767 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 14 07:05:17 crc kubenswrapper[5058]: I1014 07:05:17.925815 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.059684 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tns4c"]
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.061716 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.074558 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tns4c"]
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.194884 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zct\" (UniqueName: \"kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.194978 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.195091 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.296375 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.296448 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zct\" (UniqueName: \"kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.296494 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.296968 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.297207 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.335407 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zct\" (UniqueName: \"kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct\") pod \"community-operators-tns4c\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:21 crc kubenswrapper[5058]: I1014 07:05:21.402001 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tns4c"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.045875 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"]
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.048000 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.068494 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"]
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.161486 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkvbg\" (UniqueName: \"kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.161566 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.161763 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: W1014 07:05:25.238382 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod527d8486_b4e1_4ee0_8618_c1bcfb73139b.slice/crio-f753805f248e4265b5bfca9660824a6e7b368fb84e1f490753afaa3f99e0d1f0 WatchSource:0}: Error finding container f753805f248e4265b5bfca9660824a6e7b368fb84e1f490753afaa3f99e0d1f0: Status 404 returned error can't find the container with id f753805f248e4265b5bfca9660824a6e7b368fb84e1f490753afaa3f99e0d1f0
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.263143 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.263241 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvbg\" (UniqueName: \"kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.263278 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.263786 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.263846 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.291997 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvbg\" (UniqueName: \"kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg\") pod \"redhat-marketplace-mknrj\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " pod="openshift-marketplace/redhat-marketplace-mknrj"
Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.384124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mknrj"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.684430 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:05:25 crc kubenswrapper[5058]: I1014 07:05:25.777462 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:05:26 crc kubenswrapper[5058]: I1014 07:05:26.016874 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"527d8486-b4e1-4ee0-8618-c1bcfb73139b","Type":"ContainerStarted","Data":"f753805f248e4265b5bfca9660824a6e7b368fb84e1f490753afaa3f99e0d1f0"} Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.362048 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.363004 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86f694bf-bzcc9_openstack(5bb4a9b8-e6bb-4de3-b025-8981a145c5ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 
07:05:26.365907 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.414048 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.414184 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2rkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7869c47d6c-zgc6r_openstack(f47d13c9-e297-4139-8f8d-60bac20a81be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.415308 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" podUID="f47d13c9-e297-4139-8f8d-60bac20a81be" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.415357 5058 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.415449 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4d5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6948694bd9-zh4dh_openstack(8a77a8c8-3dc6-4cbe-b4f9-506d29713f26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.416838 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh" podUID="8a77a8c8-3dc6-4cbe-b4f9-506d29713f26" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.439203 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.439326 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8wqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d487d97d7-z6zlq_openstack(f7049559-846e-4f8a-b56f-d07d57108a88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:05:26 crc kubenswrapper[5058]: E1014 07:05:26.440951 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq" podUID="f7049559-846e-4f8a-b56f-d07d57108a88" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.025086 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerStarted","Data":"978f4d2e7e004e78ae23e9f1eadde34cfc0fb72764091ba6730d46602cf48bd9"} Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.026840 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerStarted","Data":"0cc5f9227cc192572877a2f92d8eda5a6e4efbb776baba69c6b8b5deecca2074"} Oct 14 07:05:27 crc kubenswrapper[5058]: E1014 07:05:27.029667 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a\\\"\"" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" 
podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" Oct 14 07:05:27 crc kubenswrapper[5058]: E1014 07:05:27.034396 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:673685cea6ea2dbd78bcb555955c1b9f05ea26018f79ee34494256a5f2d7b74a\\\"\"" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" podUID="f47d13c9-e297-4139-8f8d-60bac20a81be" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.051074 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.202570 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.216622 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tns4c"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.227060 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.234337 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.302510 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.392708 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wskz9"] Oct 14 07:05:27 crc kubenswrapper[5058]: W1014 07:05:27.399286 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod844444ca_eba1_401b_b802_07dbc84e7993.slice/crio-325c135c5785c192da9297f82d6c195d38db009f28da854b132cc63455cd4aa5 WatchSource:0}: Error finding container 325c135c5785c192da9297f82d6c195d38db009f28da854b132cc63455cd4aa5: Status 404 returned error can't find the container with id 325c135c5785c192da9297f82d6c195d38db009f28da854b132cc63455cd4aa5 Oct 14 07:05:27 crc kubenswrapper[5058]: W1014 07:05:27.496215 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5a36ee5_eb19_4390_a07e_3e22a191ad58.slice/crio-4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4 WatchSource:0}: Error finding container 4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4: Status 404 returned error can't find the container with id 4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4 Oct 14 07:05:27 crc kubenswrapper[5058]: W1014 07:05:27.606636 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ea8d39_3133_41be_b3ed_16bc94b3e4a9.slice/crio-7073c8f0285eeba0e0c3abca4e1af8ff2ea62e67a35df5abaa3e3b0df4e8c5c4 WatchSource:0}: Error finding container 7073c8f0285eeba0e0c3abca4e1af8ff2ea62e67a35df5abaa3e3b0df4e8c5c4: Status 404 returned error can't find the container with id 7073c8f0285eeba0e0c3abca4e1af8ff2ea62e67a35df5abaa3e3b0df4e8c5c4 Oct 14 07:05:27 crc kubenswrapper[5058]: W1014 07:05:27.624579 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fd6e5f9_1445_4903_9025_d468f23f82d4.slice/crio-19bfff0a04242d1a0a8947c8c920e5d2f23c5f2b7cf6be922b8fe0c30c783ad4 WatchSource:0}: 
Error finding container 19bfff0a04242d1a0a8947c8c920e5d2f23c5f2b7cf6be922b8fe0c30c783ad4: Status 404 returned error can't find the container with id 19bfff0a04242d1a0a8947c8c920e5d2f23c5f2b7cf6be922b8fe0c30c783ad4 Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.689015 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.771382 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.803056 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config\") pod \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.803110 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc\") pod \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.803176 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4d5t\" (UniqueName: \"kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t\") pod \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\" (UID: \"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26\") " Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.803750 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config" (OuterVolumeSpecName: "config") pod "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26" (UID: "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.803820 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26" (UID: "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.807950 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t" (OuterVolumeSpecName: "kube-api-access-s4d5t") pod "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26" (UID: "8a77a8c8-3dc6-4cbe-b4f9-506d29713f26"). InnerVolumeSpecName "kube-api-access-s4d5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.907283 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wqv\" (UniqueName: \"kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv\") pod \"f7049559-846e-4f8a-b56f-d07d57108a88\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.907452 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config\") pod \"f7049559-846e-4f8a-b56f-d07d57108a88\" (UID: \"f7049559-846e-4f8a-b56f-d07d57108a88\") " Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.907776 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.907801 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.907811 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4d5t\" (UniqueName: \"kubernetes.io/projected/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26-kube-api-access-s4d5t\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.908205 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config" (OuterVolumeSpecName: "config") pod "f7049559-846e-4f8a-b56f-d07d57108a88" (UID: "f7049559-846e-4f8a-b56f-d07d57108a88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:27 crc kubenswrapper[5058]: I1014 07:05:27.919043 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv" (OuterVolumeSpecName: "kube-api-access-k8wqv") pod "f7049559-846e-4f8a-b56f-d07d57108a88" (UID: "f7049559-846e-4f8a-b56f-d07d57108a88"). InnerVolumeSpecName "kube-api-access-k8wqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.009394 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7049559-846e-4f8a-b56f-d07d57108a88-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.009427 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wqv\" (UniqueName: \"kubernetes.io/projected/f7049559-846e-4f8a-b56f-d07d57108a88-kube-api-access-k8wqv\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.036321 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerStarted","Data":"cfbb8fc020bd5c7014360f2430cf5bf213f4b3a02fb43797a28fd4ed195bae5a"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.037102 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerStarted","Data":"7073c8f0285eeba0e0c3abca4e1af8ff2ea62e67a35df5abaa3e3b0df4e8c5c4"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.037926 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc" event={"ID":"6faed14d-d25e-43b8-96db-c64b6b3feece","Type":"ContainerStarted","Data":"7d7a6b93b1b7c173d5b833f758f9253e768cb913ec9d5db1349828e1f593605f"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.038958 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2af2c14-fb00-45d0-8414-8754189455a0","Type":"ContainerStarted","Data":"cddbfd68bb2ac47b37e2ace5a28bc2e67d80087cbaacc2ec5c404da47ed4344e"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.039964 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerStarted","Data":"325c135c5785c192da9297f82d6c195d38db009f28da854b132cc63455cd4aa5"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.040671 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh" event={"ID":"8a77a8c8-3dc6-4cbe-b4f9-506d29713f26","Type":"ContainerDied","Data":"06c51b88b4ed52dccb231f821ca56dcc0de293a06a903f80e0b7f631cd9d0296"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.040689 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6948694bd9-zh4dh" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.042818 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerStarted","Data":"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.044446 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerStarted","Data":"4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.046787 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerStarted","Data":"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.048368 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.048375 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d487d97d7-z6zlq" event={"ID":"f7049559-846e-4f8a-b56f-d07d57108a88","Type":"ContainerDied","Data":"97c74e3c0a2d5b261bdaf16f01274b588f357089aa76676bc05913b0f5a5de6e"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.049422 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerStarted","Data":"19bfff0a04242d1a0a8947c8c920e5d2f23c5f2b7cf6be922b8fe0c30c783ad4"} Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.102217 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"] Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.108327 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d487d97d7-z6zlq"] Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.185659 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"] Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.200449 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6948694bd9-zh4dh"] Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.799014 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a77a8c8-3dc6-4cbe-b4f9-506d29713f26" path="/var/lib/kubelet/pods/8a77a8c8-3dc6-4cbe-b4f9-506d29713f26/volumes" Oct 14 07:05:28 crc kubenswrapper[5058]: I1014 07:05:28.799965 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7049559-846e-4f8a-b56f-d07d57108a88" path="/var/lib/kubelet/pods/f7049559-846e-4f8a-b56f-d07d57108a88/volumes" Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.061030 5058 generic.go:334] "Generic (PLEG): container finished" podID="844444ca-eba1-401b-b802-07dbc84e7993" containerID="5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd" exitCode=0 Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.061096 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerDied","Data":"5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd"} Oct 14 07:05:29 crc 
kubenswrapper[5058]: I1014 07:05:29.065129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"527d8486-b4e1-4ee0-8618-c1bcfb73139b","Type":"ContainerStarted","Data":"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f"} Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.065847 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.067969 5058 generic.go:334] "Generic (PLEG): container finished" podID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerID="d1a577da39acf9f68bc5e2ec079ceb7b4229a752b7706981df7a3823ac211104" exitCode=0 Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.068216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerDied","Data":"d1a577da39acf9f68bc5e2ec079ceb7b4229a752b7706981df7a3823ac211104"} Oct 14 07:05:29 crc kubenswrapper[5058]: I1014 07:05:29.103098 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.609466516 podStartE2EDuration="20.103080395s" podCreationTimestamp="2025-10-14 07:05:09 +0000 UTC" firstStartedPulling="2025-10-14 07:05:25.258847118 +0000 UTC m=+1073.169930964" lastFinishedPulling="2025-10-14 07:05:28.752461037 +0000 UTC m=+1076.663544843" observedRunningTime="2025-10-14 07:05:29.096704114 +0000 UTC m=+1077.007787930" watchObservedRunningTime="2025-10-14 07:05:29.103080395 +0000 UTC m=+1077.014164191" Oct 14 07:05:33 crc kubenswrapper[5058]: I1014 07:05:33.656485 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:05:33 crc kubenswrapper[5058]: I1014 07:05:33.657021 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:05:34 crc kubenswrapper[5058]: I1014 07:05:34.116250 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerStarted","Data":"2a30d6871f0cb9383146a4e232ec2645ac9fb89726f9499dd62e68e5a2339e93"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.131609 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2af2c14-fb00-45d0-8414-8754189455a0","Type":"ContainerStarted","Data":"009dfda92aa7d854c85932f7c1982d950ec47b6a6ac6fe07c1206850d5895921"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.132152 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.135323 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerStarted","Data":"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.143259 
5058 generic.go:334] "Generic (PLEG): container finished" podID="844444ca-eba1-401b-b802-07dbc84e7993" containerID="31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d" exitCode=0 Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.143387 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerDied","Data":"31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.148459 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerDied","Data":"2a4e34472a8301fcad7040c61a4f765d6dca146ab018fd225366bade475734ce"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.148542 5058 generic.go:334] "Generic (PLEG): container finished" podID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerID="2a4e34472a8301fcad7040c61a4f765d6dca146ab018fd225366bade475734ce" exitCode=0 Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.156108 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerStarted","Data":"ddd9d9608be750d69855a085c54444295e513581feb904e658f9daaf62f4edf6"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.168942 5058 generic.go:334] "Generic (PLEG): container finished" podID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerID="2a30d6871f0cb9383146a4e232ec2645ac9fb89726f9499dd62e68e5a2339e93" exitCode=0 Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.169179 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerDied","Data":"2a30d6871f0cb9383146a4e232ec2645ac9fb89726f9499dd62e68e5a2339e93"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.174908 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerStarted","Data":"754811d971b5ee596d7950d9970b2744d554f9a03ec5093c8a2ff92a72eb0be3"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.178129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc" event={"ID":"6faed14d-d25e-43b8-96db-c64b6b3feece","Type":"ContainerStarted","Data":"3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.178190 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4gftc" Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.181374 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerStarted","Data":"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d"} Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.181504 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.96758476 podStartE2EDuration="27.181481125s" podCreationTimestamp="2025-10-14 07:05:08 +0000 UTC" firstStartedPulling="2025-10-14 07:05:27.60268338 +0000 UTC m=+1075.513767186" lastFinishedPulling="2025-10-14 07:05:33.816579735 +0000 UTC m=+1081.727663551" observedRunningTime="2025-10-14 07:05:35.161770034 +0000 UTC 
m=+1083.072853850" watchObservedRunningTime="2025-10-14 07:05:35.181481125 +0000 UTC m=+1083.092564941" Oct 14 07:05:35 crc kubenswrapper[5058]: I1014 07:05:35.257808 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4gftc" podStartSLOduration=14.925607395 podStartE2EDuration="21.257772836s" podCreationTimestamp="2025-10-14 07:05:14 +0000 UTC" firstStartedPulling="2025-10-14 07:05:27.600033315 +0000 UTC m=+1075.511117121" lastFinishedPulling="2025-10-14 07:05:33.932198756 +0000 UTC m=+1081.843282562" observedRunningTime="2025-10-14 07:05:35.253611528 +0000 UTC m=+1083.164695334" watchObservedRunningTime="2025-10-14 07:05:35.257772836 +0000 UTC m=+1083.168856642" Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.192674 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerStarted","Data":"22fdf44043fe55935473ae22d6ad1137d2afc4ecb810ce8c9198ee0409c2cfc9"} Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.195760 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerStarted","Data":"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8"} Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.199298 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerStarted","Data":"a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c"} Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.199344 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerStarted","Data":"3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3"} Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.199548 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wskz9" Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.213582 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mknrj" podStartSLOduration=4.640135923 podStartE2EDuration="11.213559951s" podCreationTimestamp="2025-10-14 07:05:25 +0000 UTC" firstStartedPulling="2025-10-14 07:05:29.071189319 +0000 UTC m=+1076.982273125" lastFinishedPulling="2025-10-14 07:05:35.644613347 +0000 UTC m=+1083.555697153" observedRunningTime="2025-10-14 07:05:36.212762788 +0000 UTC m=+1084.123846634" watchObservedRunningTime="2025-10-14 07:05:36.213559951 +0000 UTC m=+1084.124643767" Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.237948 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wskz9" podStartSLOduration=15.952820613 podStartE2EDuration="22.237932435s" podCreationTimestamp="2025-10-14 07:05:14 +0000 UTC" firstStartedPulling="2025-10-14 07:05:27.63076376 +0000 UTC m=+1075.541847566" lastFinishedPulling="2025-10-14 07:05:33.915875572 +0000 UTC m=+1081.826959388" observedRunningTime="2025-10-14 07:05:36.236133623 +0000 UTC m=+1084.147217459" watchObservedRunningTime="2025-10-14 07:05:36.237932435 +0000 UTC m=+1084.149016241" Oct 14 07:05:36 crc kubenswrapper[5058]: I1014 07:05:36.258051 5058 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-tns4c" podStartSLOduration=8.754919469 podStartE2EDuration="15.258033966s" podCreationTimestamp="2025-10-14 07:05:21 +0000 UTC" firstStartedPulling="2025-10-14 07:05:29.063823499 +0000 UTC m=+1076.974907345" lastFinishedPulling="2025-10-14 07:05:35.566938036 +0000 UTC m=+1083.478021842" observedRunningTime="2025-10-14 07:05:36.253099795 +0000 UTC m=+1084.164183611" watchObservedRunningTime="2025-10-14 07:05:36.258033966 +0000 UTC m=+1084.169117792" Oct 14 07:05:37 crc kubenswrapper[5058]: I1014 07:05:37.210394 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wskz9" Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.223428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerStarted","Data":"a198fefec8d144e30064366b09f01772a3c0cf42d9352708a7015336d4c0e2d0"} Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.227328 5058 generic.go:334] "Generic (PLEG): container finished" podID="133d4cdf-58ed-4544-8f05-328587a2b701" containerID="63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c" exitCode=0 Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.227410 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerDied","Data":"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c"} Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.231698 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerStarted","Data":"49e8270533c6a8d714a36b9d0639d16b7a4f6bbe36fd6e76e2691778fef147d6"} Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.235134 5058 generic.go:334] "Generic (PLEG): container finished" podID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerID="92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d" exitCode=0 Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.235994 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerDied","Data":"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d"} Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.252037 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.793905373 podStartE2EDuration="23.252022231s" podCreationTimestamp="2025-10-14 07:05:15 +0000 UTC" firstStartedPulling="2025-10-14 07:05:27.500330607 +0000 UTC m=+1075.411414413" lastFinishedPulling="2025-10-14 07:05:37.958447435 +0000 UTC m=+1085.869531271" observedRunningTime="2025-10-14 07:05:38.249428897 +0000 UTC m=+1086.160512703" watchObservedRunningTime="2025-10-14 07:05:38.252022231 +0000 UTC m=+1086.163106037" Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.289271 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.981536524 podStartE2EDuration="26.289256851s" podCreationTimestamp="2025-10-14 07:05:12 +0000 UTC" firstStartedPulling="2025-10-14 07:05:27.625018696 +0000 UTC m=+1075.536102502" lastFinishedPulling="2025-10-14 07:05:37.932739013 +0000 UTC m=+1085.843822829" observedRunningTime="2025-10-14 07:05:38.285830273 +0000 UTC 
m=+1086.196914089" watchObservedRunningTime="2025-10-14 07:05:38.289256851 +0000 UTC m=+1086.200340657" Oct 14 07:05:38 crc kubenswrapper[5058]: I1014 07:05:38.953773 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.246901 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerStarted","Data":"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0"} Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.249278 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerStarted","Data":"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2"} Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.251521 5058 generic.go:334] "Generic (PLEG): container finished" podID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerID="6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f" exitCode=0 Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.251963 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" event={"ID":"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce","Type":"ContainerDied","Data":"6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f"} Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.296387 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.807859492 podStartE2EDuration="33.296358756s" podCreationTimestamp="2025-10-14 07:05:06 +0000 UTC" firstStartedPulling="2025-10-14 07:05:26.374340118 +0000 UTC m=+1074.285423924" lastFinishedPulling="2025-10-14 07:05:33.862839392 +0000 UTC m=+1081.773923188" observedRunningTime="2025-10-14 07:05:39.282584394 +0000 UTC m=+1087.193668290" watchObservedRunningTime="2025-10-14 07:05:39.296358756 +0000 UTC m=+1087.207442592" Oct 14 07:05:39 crc kubenswrapper[5058]: I1014 07:05:39.356546 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.789482199 podStartE2EDuration="34.356522819s" podCreationTimestamp="2025-10-14 07:05:05 +0000 UTC" firstStartedPulling="2025-10-14 07:05:26.373972767 +0000 UTC m=+1074.285056573" lastFinishedPulling="2025-10-14 07:05:33.941013377 +0000 UTC m=+1081.852097193" observedRunningTime="2025-10-14 07:05:39.350145107 +0000 UTC m=+1087.261228933" watchObservedRunningTime="2025-10-14 07:05:39.356522819 +0000 UTC m=+1087.267606645" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.055832 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.261416 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" event={"ID":"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce","Type":"ContainerStarted","Data":"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47"} Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.261835 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.292940 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" 
podStartSLOduration=3.023552345 podStartE2EDuration="37.29291935s" podCreationTimestamp="2025-10-14 07:05:03 +0000 UTC" firstStartedPulling="2025-10-14 07:05:04.108246761 +0000 UTC m=+1052.019330567" lastFinishedPulling="2025-10-14 07:05:38.377613746 +0000 UTC m=+1086.288697572" observedRunningTime="2025-10-14 07:05:40.288053062 +0000 UTC m=+1088.199136878" watchObservedRunningTime="2025-10-14 07:05:40.29291935 +0000 UTC m=+1088.204003166" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.622870 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.683908 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 14 07:05:40 crc kubenswrapper[5058]: I1014 07:05:40.953602 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.020288 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.271896 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.344471 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.347318 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.402967 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.403090 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.506876 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.614671 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.665813 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.667422 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.670727 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.685159 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.733682 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4x26x"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.734623 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.738773 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.751392 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4x26x"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760421 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760472 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760492 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760514 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcs8x\" (UniqueName: \"kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760541 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760557 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760709 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760825 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8w5t\" (UniqueName: \"kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t\") pod 
\"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.760865 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.761006 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.824176 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.852710 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.855221 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.858085 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.863842 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864004 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8w5t\" (UniqueName: \"kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864200 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864235 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc 
kubenswrapper[5058]: I1014 07:05:41.864295 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864322 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864395 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcs8x\" (UniqueName: \"kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864597 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.864699 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.867237 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.874232 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.876885 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.889418 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.890910 5058 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-f8w5t\" (UniqueName: \"kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.895296 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config\") pod \"dnsmasq-dns-57c5bcf59c-n544s\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.905571 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.910938 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.916115 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcs8x\" (UniqueName: \"kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.922746 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.934663 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle\") pod \"ovn-controller-metrics-4x26x\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.966116 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4bg\" (UniqueName: \"kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.966190 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.966208 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " 
pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.966258 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.966277 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.992940 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:05:41 crc kubenswrapper[5058]: I1014 07:05:41.994402 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.000105 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.000195 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.002601 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.005014 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qzc2l" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.008280 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.030427 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.057431 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.067783 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068037 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068370 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068554 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4bg\" (UniqueName: \"kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068593 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068625 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068652 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068684 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068720 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068742 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb85\" (UniqueName: \"kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 
07:05:42.068768 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068831 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.068859 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.070207 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.071040 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.071160 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.071300 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.088888 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4bg\" (UniqueName: \"kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg\") pod \"dnsmasq-dns-d49c4d845-wxq85\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.169580 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2rkg\" (UniqueName: \"kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg\") pod \"f47d13c9-e297-4139-8f8d-60bac20a81be\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.169685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc\") pod \"f47d13c9-e297-4139-8f8d-60bac20a81be\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.169758 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config\") pod \"f47d13c9-e297-4139-8f8d-60bac20a81be\" (UID: \"f47d13c9-e297-4139-8f8d-60bac20a81be\") " Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.170830 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config" (OuterVolumeSpecName: "config") pod "f47d13c9-e297-4139-8f8d-60bac20a81be" (UID: "f47d13c9-e297-4139-8f8d-60bac20a81be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.170919 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f47d13c9-e297-4139-8f8d-60bac20a81be" (UID: "f47d13c9-e297-4139-8f8d-60bac20a81be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171181 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171275 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171310 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb85\" (UniqueName: \"kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171386 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171408 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171446 
5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171510 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.171521 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47d13c9-e297-4139-8f8d-60bac20a81be-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.172764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.173107 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.177311 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.177926 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.183913 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg" (OuterVolumeSpecName: "kube-api-access-n2rkg") pod "f47d13c9-e297-4139-8f8d-60bac20a81be" (UID: "f47d13c9-e297-4139-8f8d-60bac20a81be"). InnerVolumeSpecName "kube-api-access-n2rkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.184304 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.184499 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.184568 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.200349 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb85\" (UniqueName: \"kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85\") pod \"ovn-northd-0\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.273317 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2rkg\" (UniqueName: \"kubernetes.io/projected/f47d13c9-e297-4139-8f8d-60bac20a81be-kube-api-access-n2rkg\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.282058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" event={"ID":"f47d13c9-e297-4139-8f8d-60bac20a81be","Type":"ContainerDied","Data":"3c05b12b948558e0899b30ac56edd748a4b9089641862af8803461daf70cc81f"} Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.282089 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7869c47d6c-zgc6r" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.282255 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="dnsmasq-dns" containerID="cri-o://a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47" gracePeriod=10 Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.353191 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"] Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.354184 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.365882 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.398583 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7869c47d6c-zgc6r"] Oct 14 07:05:42 crc kubenswrapper[5058]: W1014 07:05:42.426890 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda633cb_5e3d_42da_b21e_be6d5a984f2f.slice/crio-43874a1d9610423e73e94af8b0fd491dded5237be0e2f524406b4d1b9908adf1 WatchSource:0}: Error finding container 43874a1d9610423e73e94af8b0fd491dded5237be0e2f524406b4d1b9908adf1: Status 404 returned error can't find the container with id 43874a1d9610423e73e94af8b0fd491dded5237be0e2f524406b4d1b9908adf1 Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.435967 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tns4c"] Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.452658 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4x26x"] Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.502905 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:42 crc kubenswrapper[5058]: W1014 07:05:42.529257 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a42559d_89e0_4b7b_8fa9_7b8e86b2fce5.slice/crio-23b6f0d5c7c77af153d31924a996205f15b303881ead7571abf16080a1247b45 WatchSource:0}: Error finding container 23b6f0d5c7c77af153d31924a996205f15b303881ead7571abf16080a1247b45: Status 404 returned error can't find the container with id 23b6f0d5c7c77af153d31924a996205f15b303881ead7571abf16080a1247b45 Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.667671 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:05:42 crc kubenswrapper[5058]: W1014 07:05:42.669977 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2af502_68db_46c4_bc9b_b62a50830a61.slice/crio-ca8ffc943903e457069bc95778926481f055880a23a6398ca072569356b221c8 WatchSource:0}: Error finding container ca8ffc943903e457069bc95778926481f055880a23a6398ca072569356b221c8: Status 404 returned error can't find the container with id ca8ffc943903e457069bc95778926481f055880a23a6398ca072569356b221c8 Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.806319 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47d13c9-e297-4139-8f8d-60bac20a81be" path="/var/lib/kubelet/pods/f47d13c9-e297-4139-8f8d-60bac20a81be/volumes" Oct 14 07:05:42 crc kubenswrapper[5058]: I1014 07:05:42.806709 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.181238 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.288349 5058 generic.go:334] "Generic (PLEG): container finished" podID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerID="6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46" exitCode=0 Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.288466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" event={"ID":"fc2af502-68db-46c4-bc9b-b62a50830a61","Type":"ContainerDied","Data":"6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.288506 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" event={"ID":"fc2af502-68db-46c4-bc9b-b62a50830a61","Type":"ContainerStarted","Data":"ca8ffc943903e457069bc95778926481f055880a23a6398ca072569356b221c8"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.289789 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4x26x" event={"ID":"fda633cb-5e3d-42da-b21e-be6d5a984f2f","Type":"ContainerStarted","Data":"dec221ebaaab946f5f28317e8fedd374c12dcc709014570c2b792f667fb91617"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.289847 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4x26x" event={"ID":"fda633cb-5e3d-42da-b21e-be6d5a984f2f","Type":"ContainerStarted","Data":"43874a1d9610423e73e94af8b0fd491dded5237be0e2f524406b4d1b9908adf1"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.290298 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc\") pod \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.290454 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config\") pod \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.290560 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkg4q\" (UniqueName: \"kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q\") pod \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\" (UID: \"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce\") " Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.294011 5058 generic.go:334] "Generic (PLEG): container finished" podID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerID="a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47" exitCode=0 Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.294086 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" event={"ID":"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce","Type":"ContainerDied","Data":"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.294117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" event={"ID":"5bb4a9b8-e6bb-4de3-b025-8981a145c5ce","Type":"ContainerDied","Data":"e39b0f545618b138dab02ea0698b480c3bca20ba8199587c7889cac2b8baacec"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 
07:05:43.294138 5058 scope.go:117] "RemoveContainer" containerID="a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.294274 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f694bf-bzcc9" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.299265 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerStarted","Data":"a4ec591bdacb9729960b42f0f5a07978a7979936bd95702391a3c68836852f37"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.303736 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q" (OuterVolumeSpecName: "kube-api-access-dkg4q") pod "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" (UID: "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce"). InnerVolumeSpecName "kube-api-access-dkg4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.316253 5058 generic.go:334] "Generic (PLEG): container finished" podID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerID="28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e" exitCode=0 Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.316683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" event={"ID":"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5","Type":"ContainerDied","Data":"28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.316735 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" event={"ID":"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5","Type":"ContainerStarted","Data":"23b6f0d5c7c77af153d31924a996205f15b303881ead7571abf16080a1247b45"} Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.397287 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkg4q\" (UniqueName: \"kubernetes.io/projected/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-kube-api-access-dkg4q\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.399632 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4x26x" podStartSLOduration=2.399607146 podStartE2EDuration="2.399607146s" podCreationTimestamp="2025-10-14 07:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:05:43.359767212 +0000 UTC m=+1091.270851018" watchObservedRunningTime="2025-10-14 07:05:43.399607146 +0000 UTC m=+1091.310690952" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.410281 5058 scope.go:117] "RemoveContainer" containerID="6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.411976 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.436425 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" (UID: "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.445721 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config" (OuterVolumeSpecName: "config") pod "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" (UID: "5bb4a9b8-e6bb-4de3-b025-8981a145c5ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.468788 5058 scope.go:117] "RemoveContainer" containerID="a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47" Oct 14 07:05:43 crc kubenswrapper[5058]: E1014 07:05:43.471516 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47\": container with ID starting with a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47 not found: ID does not exist" containerID="a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.471568 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47"} err="failed to get container status \"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47\": rpc error: code = NotFound desc = could not find container \"a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47\": container with ID starting with a90af3aebf583287a7cbd67613582b3ecae27fb990b0c246074f97d8042e6f47 not found: ID does not exist" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.471588 5058 scope.go:117] "RemoveContainer" containerID="6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f" Oct 14 07:05:43 crc kubenswrapper[5058]: E1014 07:05:43.471857 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f\": container with ID starting with 6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f not found: ID does not exist" containerID="6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.471874 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f"} err="failed to get container status \"6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f\": rpc error: code = NotFound desc = could not find container \"6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f\": container with ID starting with 6a2a7b9506df589b183781bed05dec9731787b572bc27acf1040da5409a0651f not found: ID does not exist" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.500488 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.505172 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.629594 5058 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"] Oct 14 07:05:43 crc kubenswrapper[5058]: I1014 07:05:43.634549 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86f694bf-bzcc9"] Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.332843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" event={"ID":"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5","Type":"ContainerStarted","Data":"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd"} Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.333316 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.337724 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" event={"ID":"fc2af502-68db-46c4-bc9b-b62a50830a61","Type":"ContainerStarted","Data":"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085"} Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.337941 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.340176 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tns4c" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="registry-server" containerID="cri-o://a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8" gracePeriod=2 Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.349440 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" podStartSLOduration=3.349392329 podStartE2EDuration="3.349392329s" podCreationTimestamp="2025-10-14 07:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:05:44.348287288 +0000 UTC m=+1092.259371124" watchObservedRunningTime="2025-10-14 07:05:44.349392329 +0000 UTC m=+1092.260476155" Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.372607 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" podStartSLOduration=3.372583479 podStartE2EDuration="3.372583479s" podCreationTimestamp="2025-10-14 07:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:05:44.366602479 +0000 UTC m=+1092.277686285" watchObservedRunningTime="2025-10-14 07:05:44.372583479 +0000 UTC m=+1092.283667325" Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.860682 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" path="/var/lib/kubelet/pods/5bb4a9b8-e6bb-4de3-b025-8981a145c5ce/volumes" Oct 14 07:05:44 crc kubenswrapper[5058]: I1014 07:05:44.882031 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.036123 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2zct\" (UniqueName: \"kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct\") pod \"844444ca-eba1-401b-b802-07dbc84e7993\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.036490 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content\") pod \"844444ca-eba1-401b-b802-07dbc84e7993\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.036611 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities\") pod \"844444ca-eba1-401b-b802-07dbc84e7993\" (UID: \"844444ca-eba1-401b-b802-07dbc84e7993\") " Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.037730 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities" (OuterVolumeSpecName: "utilities") pod "844444ca-eba1-401b-b802-07dbc84e7993" (UID: "844444ca-eba1-401b-b802-07dbc84e7993"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.043660 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct" (OuterVolumeSpecName: "kube-api-access-t2zct") pod "844444ca-eba1-401b-b802-07dbc84e7993" (UID: "844444ca-eba1-401b-b802-07dbc84e7993"). InnerVolumeSpecName "kube-api-access-t2zct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.089637 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "844444ca-eba1-401b-b802-07dbc84e7993" (UID: "844444ca-eba1-401b-b802-07dbc84e7993"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.138816 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.138852 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2zct\" (UniqueName: \"kubernetes.io/projected/844444ca-eba1-401b-b802-07dbc84e7993-kube-api-access-t2zct\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.138864 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844444ca-eba1-401b-b802-07dbc84e7993-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.351390 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerStarted","Data":"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d"} Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.351451 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerStarted","Data":"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437"} Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.352175 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.355607 5058 generic.go:334] "Generic (PLEG): container finished" podID="844444ca-eba1-401b-b802-07dbc84e7993" containerID="a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8" exitCode=0 Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.355970 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerDied","Data":"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8"} Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.356042 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tns4c" event={"ID":"844444ca-eba1-401b-b802-07dbc84e7993","Type":"ContainerDied","Data":"325c135c5785c192da9297f82d6c195d38db009f28da854b132cc63455cd4aa5"} Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.356085 5058 scope.go:117] "RemoveContainer" containerID="a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.356096 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tns4c" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.371539 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.709088855 podStartE2EDuration="4.371522862s" podCreationTimestamp="2025-10-14 07:05:41 +0000 UTC" firstStartedPulling="2025-10-14 07:05:42.815864791 +0000 UTC m=+1090.726948607" lastFinishedPulling="2025-10-14 07:05:44.478298808 +0000 UTC m=+1092.389382614" observedRunningTime="2025-10-14 07:05:45.369538696 +0000 UTC m=+1093.280622532" watchObservedRunningTime="2025-10-14 07:05:45.371522862 +0000 UTC m=+1093.282606668" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.385107 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.385153 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.392753 5058 scope.go:117] "RemoveContainer" containerID="31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.411118 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tns4c"] Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.422379 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tns4c"] Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.428788 5058 scope.go:117] "RemoveContainer" containerID="5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.441253 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.477044 5058 scope.go:117] "RemoveContainer" containerID="a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8" Oct 14 07:05:45 crc kubenswrapper[5058]: E1014 07:05:45.479509 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8\": container with ID starting with a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8 not found: ID does not exist" containerID="a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.479557 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8"} err="failed to get container status \"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8\": rpc error: code = NotFound desc = could not find container \"a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8\": container with ID starting with a48d91e847227bbdcde55d5f46ee6ecc91346b775adf993763356058d6a2aab8 not found: ID does not exist" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.479583 5058 scope.go:117] "RemoveContainer" containerID="31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d" Oct 14 07:05:45 crc kubenswrapper[5058]: E1014 07:05:45.480357 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d\": container with ID starting with 31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d not found: ID does not exist" containerID="31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.480379 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d"} err="failed to get container status \"31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d\": rpc error: code = NotFound desc = could not find container \"31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d\": container with ID starting with 31519ad3f99b8aedf41d989556d0bf42fbe25e2bc1edf15e30b5ab2a4f8f3b4d not found: ID does not exist" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.480397 5058 scope.go:117] "RemoveContainer" containerID="5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd" Oct 14 07:05:45 crc kubenswrapper[5058]: E1014 07:05:45.481144 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd\": container with ID starting with 5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd not found: ID does not exist" containerID="5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd" Oct 14 07:05:45 crc kubenswrapper[5058]: I1014 07:05:45.481275 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd"} err="failed to get container status \"5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd\": rpc error: code = NotFound desc = could not find container \"5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd\": container with ID starting with 5f68445cc7685de39f2f87cf471d98e77e27ff8a396e6484b00d7c62c72517fd not found: ID does not exist" Oct 14 07:05:46 crc kubenswrapper[5058]: I1014 07:05:46.448644 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:46 crc kubenswrapper[5058]: I1014 07:05:46.737838 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 14 07:05:46 crc kubenswrapper[5058]: I1014 07:05:46.737907 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 14 07:05:46 crc kubenswrapper[5058]: I1014 07:05:46.803352 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844444ca-eba1-401b-b802-07dbc84e7993" path="/var/lib/kubelet/pods/844444ca-eba1-401b-b802-07dbc84e7993/volumes" Oct 14 07:05:46 crc kubenswrapper[5058]: I1014 07:05:46.815273 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 14 07:05:47 crc kubenswrapper[5058]: I1014 07:05:47.450773 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"] Oct 14 07:05:47 crc kubenswrapper[5058]: I1014 07:05:47.457186 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.056356 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.056432 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100165 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jcflc"] Oct 14 07:05:48 crc kubenswrapper[5058]: E1014 07:05:48.100607 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="dnsmasq-dns" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100630 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="dnsmasq-dns" Oct 14 07:05:48 crc kubenswrapper[5058]: E1014 07:05:48.100649 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="extract-utilities" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100659 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="extract-utilities" Oct 14 07:05:48 crc kubenswrapper[5058]: E1014 07:05:48.100696 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="registry-server" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100704 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="registry-server" Oct 14 07:05:48 crc kubenswrapper[5058]: E1014 07:05:48.100722 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="init" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100729 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="init" Oct 14 07:05:48 crc kubenswrapper[5058]: E1014 07:05:48.100744 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="extract-content" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.100751 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="extract-content" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.101050 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="844444ca-eba1-401b-b802-07dbc84e7993" containerName="registry-server" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.101076 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb4a9b8-e6bb-4de3-b025-8981a145c5ce" containerName="dnsmasq-dns" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.101697 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.111118 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jcflc"] Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.157587 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.198777 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllb2\" (UniqueName: \"kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2\") pod \"keystone-db-create-jcflc\" (UID: \"61eee2da-b885-4d29-a8c0-3ea27e89d53f\") " pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.300729 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllb2\" (UniqueName: \"kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2\") pod \"keystone-db-create-jcflc\" (UID: \"61eee2da-b885-4d29-a8c0-3ea27e89d53f\") " pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.302187 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xdsgk"] Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.303386 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.310186 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xdsgk"] Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.333031 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllb2\" (UniqueName: \"kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2\") pod \"keystone-db-create-jcflc\" (UID: \"61eee2da-b885-4d29-a8c0-3ea27e89d53f\") " pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.389198 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mknrj" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="registry-server" containerID="cri-o://22fdf44043fe55935473ae22d6ad1137d2afc4ecb810ce8c9198ee0409c2cfc9" gracePeriod=2 Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.402888 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnlx\" (UniqueName: \"kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx\") pod \"placement-db-create-xdsgk\" (UID: \"c456dea6-e279-4d34-b077-0404a2b61fe2\") " pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.434584 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.450171 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.506997 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssnlx\" (UniqueName: \"kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx\") pod \"placement-db-create-xdsgk\" (UID: \"c456dea6-e279-4d34-b077-0404a2b61fe2\") " pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.532964 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnlx\" (UniqueName: \"kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx\") pod \"placement-db-create-xdsgk\" (UID: \"c456dea6-e279-4d34-b077-0404a2b61fe2\") " pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.620350 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:48 crc kubenswrapper[5058]: I1014 07:05:48.918062 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jcflc"] Oct 14 07:05:48 crc kubenswrapper[5058]: W1014 07:05:48.927113 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61eee2da_b885_4d29_a8c0_3ea27e89d53f.slice/crio-41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3 WatchSource:0}: Error finding container 41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3: Status 404 returned error can't find the container with id 41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3 Oct 14 07:05:49 crc kubenswrapper[5058]: W1014 07:05:49.043267 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc456dea6_e279_4d34_b077_0404a2b61fe2.slice/crio-725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a WatchSource:0}: Error finding container 725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a: Status 404 returned error can't find the container with id 725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a Oct 14 07:05:49 crc kubenswrapper[5058]: I1014 07:05:49.044854 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xdsgk"] Oct 14 07:05:49 crc kubenswrapper[5058]: I1014 07:05:49.400981 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xdsgk" event={"ID":"c456dea6-e279-4d34-b077-0404a2b61fe2","Type":"ContainerStarted","Data":"725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a"} Oct 14 07:05:49 crc kubenswrapper[5058]: I1014 07:05:49.405190 5058 generic.go:334] "Generic (PLEG): container finished" podID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerID="22fdf44043fe55935473ae22d6ad1137d2afc4ecb810ce8c9198ee0409c2cfc9" exitCode=0 Oct 14 07:05:49 crc kubenswrapper[5058]: I1014 07:05:49.405286 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerDied","Data":"22fdf44043fe55935473ae22d6ad1137d2afc4ecb810ce8c9198ee0409c2cfc9"} Oct 14 07:05:49 crc kubenswrapper[5058]: I1014 
07:05:49.406714 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jcflc" event={"ID":"61eee2da-b885-4d29-a8c0-3ea27e89d53f","Type":"ContainerStarted","Data":"41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3"} Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.150129 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.150642 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="dnsmasq-dns" containerID="cri-o://bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd" gracePeriod=10 Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.154276 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.177572 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.180303 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.203925 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.242774 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.242899 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.242986 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.243011 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnb4\" (UniqueName: \"kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.243080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.346698 
5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.346789 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.346868 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.346931 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.346957 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnb4\" (UniqueName: \"kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.347764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.347824 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.347854 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.348339 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.365161 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnb4\" (UniqueName: 
\"kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4\") pod \"dnsmasq-dns-c757dd68f-q9w6l\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.513110 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.951155 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:05:50 crc kubenswrapper[5058]: I1014 07:05:50.974684 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.064426 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities\") pod \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.064817 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content\") pod \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.064910 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvbg\" (UniqueName: \"kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg\") pod \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\" (UID: \"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.066751 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities" (OuterVolumeSpecName: "utilities") pod "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" (UID: "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.074322 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg" (OuterVolumeSpecName: "kube-api-access-zkvbg") pod "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" (UID: "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9"). InnerVolumeSpecName "kube-api-access-zkvbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.081385 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" (UID: "e8ea8d39-3133-41be-b3ed-16bc94b3e4a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.092499 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.166231 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb\") pod \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.166434 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc\") pod \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.166883 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8w5t\" (UniqueName: \"kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t\") pod \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.166923 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config\") pod \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\" (UID: \"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5\") " Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.167377 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvbg\" (UniqueName: \"kubernetes.io/projected/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-kube-api-access-zkvbg\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.167392 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.167403 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.184282 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t" (OuterVolumeSpecName: "kube-api-access-f8w5t") pod "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" (UID: "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5"). InnerVolumeSpecName "kube-api-access-f8w5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.219218 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" (UID: "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.220072 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config" (OuterVolumeSpecName: "config") pod "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" (UID: "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.234241 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" (UID: "9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.268923 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.268959 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8w5t\" (UniqueName: \"kubernetes.io/projected/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-kube-api-access-f8w5t\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.268969 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.268978 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.289070 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.289833 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="extract-utilities" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.289922 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="extract-utilities" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.290013 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="registry-server" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290094 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="registry-server" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.290174 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="init" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290240 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="init" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.290319 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="dnsmasq-dns" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290385 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="dnsmasq-dns" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.290464 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="extract-content" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290530 5058 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="extract-content" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290787 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" containerName="registry-server" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.290896 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerName="dnsmasq-dns" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.296931 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.299209 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sfl9t" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.301759 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.301812 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.303837 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.311254 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.370714 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.370789 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.370897 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.370955 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rfbv\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.370987 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.420882 5058 generic.go:334] "Generic (PLEG): container finished" podID="61eee2da-b885-4d29-a8c0-3ea27e89d53f" 
containerID="4ab24366db71622cf78f4f474ac5de29381375bdefa142cdd428db24843f358b" exitCode=0 Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.420951 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jcflc" event={"ID":"61eee2da-b885-4d29-a8c0-3ea27e89d53f","Type":"ContainerDied","Data":"4ab24366db71622cf78f4f474ac5de29381375bdefa142cdd428db24843f358b"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.423504 5058 generic.go:334] "Generic (PLEG): container finished" podID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" containerID="bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd" exitCode=0 Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.423542 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" event={"ID":"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5","Type":"ContainerDied","Data":"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.423569 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" event={"ID":"9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5","Type":"ContainerDied","Data":"23b6f0d5c7c77af153d31924a996205f15b303881ead7571abf16080a1247b45"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.423592 5058 scope.go:117] "RemoveContainer" containerID="bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.423625 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c5bcf59c-n544s" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.425137 5058 generic.go:334] "Generic (PLEG): container finished" podID="05a58868-3565-4a25-b847-9422039abe76" containerID="ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1" exitCode=0 Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.425530 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" event={"ID":"05a58868-3565-4a25-b847-9422039abe76","Type":"ContainerDied","Data":"ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.425610 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" event={"ID":"05a58868-3565-4a25-b847-9422039abe76","Type":"ContainerStarted","Data":"ebc45d9d98c95e9e7af8ad8f632175915a662310074be27c1b33a0325680a785"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.427060 5058 generic.go:334] "Generic (PLEG): container finished" podID="c456dea6-e279-4d34-b077-0404a2b61fe2" containerID="4d72dbf398feb91143056dfbb8ffff02c4e1ac77a9f3809301bc450498d01697" exitCode=0 Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.427090 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xdsgk" event={"ID":"c456dea6-e279-4d34-b077-0404a2b61fe2","Type":"ContainerDied","Data":"4d72dbf398feb91143056dfbb8ffff02c4e1ac77a9f3809301bc450498d01697"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.428980 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mknrj" event={"ID":"e8ea8d39-3133-41be-b3ed-16bc94b3e4a9","Type":"ContainerDied","Data":"7073c8f0285eeba0e0c3abca4e1af8ff2ea62e67a35df5abaa3e3b0df4e8c5c4"} Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.429041 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mknrj" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474058 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474305 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474417 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rfbv\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474455 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.474877 5058 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.475428 5058 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.475354 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.474981 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.475517 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.476565 5058 scope.go:117] "RemoveContainer" containerID="28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.479242 5058 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift podName:19857bcc-939e-4543-ae17-09d142baebf2 nodeName:}" failed. No retries permitted until 2025-10-14 07:05:51.9756035 +0000 UTC m=+1099.886687326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift") pod "swift-storage-0" (UID: "19857bcc-939e-4543-ae17-09d142baebf2") : configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.494878 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rfbv\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.512528 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.601932 5058 scope.go:117] "RemoveContainer" containerID="bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.602935 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd\": container with ID starting with bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd not found: ID does not exist" containerID="bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.602980 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd"} err="failed to get container status \"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd\": rpc error: code = NotFound desc = could not find container \"bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd\": container with ID starting with bbb30a80a4277e87fb4d55882420d9df5dc916b813e818428c93f03dd1da1bbd not found: ID does not exist" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.603006 5058 scope.go:117] "RemoveContainer" containerID="28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.603385 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e\": container with ID starting with 28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e not found: ID does not exist" containerID="28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.603443 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e"} err="failed to get container status \"28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e\": rpc error: code = NotFound desc = could not find container 
\"28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e\": container with ID starting with 28e6136464dc272838b2f0cdb82b4969b6de1d95f193e355818ad5454b87273e not found: ID does not exist" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.603457 5058 scope.go:117] "RemoveContainer" containerID="22fdf44043fe55935473ae22d6ad1137d2afc4ecb810ce8c9198ee0409c2cfc9" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.633283 5058 scope.go:117] "RemoveContainer" containerID="2a30d6871f0cb9383146a4e232ec2645ac9fb89726f9499dd62e68e5a2339e93" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.651613 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.661844 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mknrj"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.664479 5058 scope.go:117] "RemoveContainer" containerID="d1a577da39acf9f68bc5e2ec079ceb7b4229a752b7706981df7a3823ac211104" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.669185 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.678580 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c5bcf59c-n544s"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.774457 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7kd74"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.775474 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.777541 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.777608 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.780928 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.793401 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kd74"] Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.882189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.882444 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7zj\" (UniqueName: \"kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.882629 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf\") pod \"swift-ring-rebalance-7kd74\" (UID: 
\"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.882702 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.883365 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.883503 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.883605 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.985222 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.985448 5058 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.985481 5058 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: E1014 07:05:51.985549 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift podName:19857bcc-939e-4543-ae17-09d142baebf2 nodeName:}" failed. No retries permitted until 2025-10-14 07:05:52.985528294 +0000 UTC m=+1100.896612150 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift") pod "swift-storage-0" (UID: "19857bcc-939e-4543-ae17-09d142baebf2") : configmap "swift-ring-files" not found Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986363 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986559 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986613 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986708 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986832 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7zj\" (UniqueName: \"kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.986939 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.987009 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.987304 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.987610 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift\") 
pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.987981 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.990574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.990653 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:51 crc kubenswrapper[5058]: I1014 07:05:51.991628 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.007473 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7zj\" (UniqueName: \"kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj\") pod \"swift-ring-rebalance-7kd74\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.099116 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.187089 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.438969 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" event={"ID":"05a58868-3565-4a25-b847-9422039abe76","Type":"ContainerStarted","Data":"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98"} Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.439369 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.458184 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" podStartSLOduration=2.458162347 podStartE2EDuration="2.458162347s" podCreationTimestamp="2025-10-14 07:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:05:52.4547691 +0000 UTC m=+1100.365852916" watchObservedRunningTime="2025-10-14 07:05:52.458162347 +0000 UTC m=+1100.369246163" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.593453 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7kd74"] Oct 14 07:05:52 crc kubenswrapper[5058]: W1014 07:05:52.609557 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a4206c6_4aa9_49a8_b6fb_c88828d6eb59.slice/crio-4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8 WatchSource:0}: Error finding container 4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8: Status 404 returned error can't find the container with id 4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8 Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.802418 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.803392 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5" path="/var/lib/kubelet/pods/9a42559d-89e0-4b7b-8fa9-7b8e86b2fce5/volumes" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.804531 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ea8d39-3133-41be-b3ed-16bc94b3e4a9" path="/var/lib/kubelet/pods/e8ea8d39-3133-41be-b3ed-16bc94b3e4a9/volumes" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.857675 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.902811 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllb2\" (UniqueName: \"kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2\") pod \"61eee2da-b885-4d29-a8c0-3ea27e89d53f\" (UID: \"61eee2da-b885-4d29-a8c0-3ea27e89d53f\") " Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.902884 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssnlx\" (UniqueName: \"kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx\") pod \"c456dea6-e279-4d34-b077-0404a2b61fe2\" (UID: \"c456dea6-e279-4d34-b077-0404a2b61fe2\") " Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.908151 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2" (OuterVolumeSpecName: "kube-api-access-hllb2") pod "61eee2da-b885-4d29-a8c0-3ea27e89d53f" (UID: "61eee2da-b885-4d29-a8c0-3ea27e89d53f"). InnerVolumeSpecName "kube-api-access-hllb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:52 crc kubenswrapper[5058]: I1014 07:05:52.908499 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx" (OuterVolumeSpecName: "kube-api-access-ssnlx") pod "c456dea6-e279-4d34-b077-0404a2b61fe2" (UID: "c456dea6-e279-4d34-b077-0404a2b61fe2"). InnerVolumeSpecName "kube-api-access-ssnlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.005559 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:53 crc kubenswrapper[5058]: E1014 07:05:53.005746 5058 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 07:05:53 crc kubenswrapper[5058]: E1014 07:05:53.005770 5058 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.005789 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hllb2\" (UniqueName: \"kubernetes.io/projected/61eee2da-b885-4d29-a8c0-3ea27e89d53f-kube-api-access-hllb2\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:53 crc kubenswrapper[5058]: E1014 07:05:53.005838 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift podName:19857bcc-939e-4543-ae17-09d142baebf2 nodeName:}" failed. No retries permitted until 2025-10-14 07:05:55.005819335 +0000 UTC m=+1102.916903141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift") pod "swift-storage-0" (UID: "19857bcc-939e-4543-ae17-09d142baebf2") : configmap "swift-ring-files" not found Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.005863 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssnlx\" (UniqueName: \"kubernetes.io/projected/c456dea6-e279-4d34-b077-0404a2b61fe2-kube-api-access-ssnlx\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.453388 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jcflc" event={"ID":"61eee2da-b885-4d29-a8c0-3ea27e89d53f","Type":"ContainerDied","Data":"41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3"} Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.453426 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jcflc" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.453435 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b70557a2dbcbcf427b046003c8a9169a43b776de2c6799ce28bdc7d2720ea3" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.459898 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kd74" event={"ID":"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59","Type":"ContainerStarted","Data":"4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8"} Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.463143 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xdsgk" event={"ID":"c456dea6-e279-4d34-b077-0404a2b61fe2","Type":"ContainerDied","Data":"725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a"} Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.463189 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725171c99554d99198ae289a420fecc15c24de985b81726c512a08de4d511e2a" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.463190 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xdsgk" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.575937 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fjrhc"] Oct 14 07:05:53 crc kubenswrapper[5058]: E1014 07:05:53.576628 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c456dea6-e279-4d34-b077-0404a2b61fe2" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.576644 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c456dea6-e279-4d34-b077-0404a2b61fe2" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: E1014 07:05:53.576663 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61eee2da-b885-4d29-a8c0-3ea27e89d53f" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.576669 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="61eee2da-b885-4d29-a8c0-3ea27e89d53f" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.576917 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="61eee2da-b885-4d29-a8c0-3ea27e89d53f" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.576936 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c456dea6-e279-4d34-b077-0404a2b61fe2" containerName="mariadb-database-create" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.577624 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.585867 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fjrhc"] Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.621928 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm2d\" (UniqueName: \"kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d\") pod \"glance-db-create-fjrhc\" (UID: \"2a95c82c-56a0-469f-bafc-f7beb19a72ba\") " pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.723306 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm2d\" (UniqueName: \"kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d\") pod \"glance-db-create-fjrhc\" (UID: \"2a95c82c-56a0-469f-bafc-f7beb19a72ba\") " pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.756093 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mm2d\" (UniqueName: \"kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d\") pod \"glance-db-create-fjrhc\" (UID: \"2a95c82c-56a0-469f-bafc-f7beb19a72ba\") " pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:53 crc kubenswrapper[5058]: I1014 07:05:53.905502 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:55 crc kubenswrapper[5058]: I1014 07:05:55.046198 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:55 crc kubenswrapper[5058]: E1014 07:05:55.046387 5058 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 07:05:55 crc kubenswrapper[5058]: E1014 07:05:55.046685 5058 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 07:05:55 crc kubenswrapper[5058]: E1014 07:05:55.046750 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift podName:19857bcc-939e-4543-ae17-09d142baebf2 nodeName:}" failed. No retries permitted until 2025-10-14 07:05:59.04673025 +0000 UTC m=+1106.957814056 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift") pod "swift-storage-0" (UID: "19857bcc-939e-4543-ae17-09d142baebf2") : configmap "swift-ring-files" not found Oct 14 07:05:55 crc kubenswrapper[5058]: I1014 07:05:55.980515 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fjrhc"] Oct 14 07:05:55 crc kubenswrapper[5058]: W1014 07:05:55.993759 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a95c82c_56a0_469f_bafc_f7beb19a72ba.slice/crio-0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef WatchSource:0}: Error finding container 0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef: Status 404 returned error can't find the container with id 0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef Oct 14 07:05:56 crc kubenswrapper[5058]: I1014 07:05:56.489206 5058 generic.go:334] "Generic (PLEG): container finished" podID="2a95c82c-56a0-469f-bafc-f7beb19a72ba" containerID="662166b77c72efe18e0ab60702f8c96f49d147c5baff21546e4f166ab46b0225" exitCode=0 Oct 14 07:05:56 crc kubenswrapper[5058]: I1014 07:05:56.489308 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fjrhc" event={"ID":"2a95c82c-56a0-469f-bafc-f7beb19a72ba","Type":"ContainerDied","Data":"662166b77c72efe18e0ab60702f8c96f49d147c5baff21546e4f166ab46b0225"} Oct 14 07:05:56 crc kubenswrapper[5058]: I1014 07:05:56.489374 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fjrhc" event={"ID":"2a95c82c-56a0-469f-bafc-f7beb19a72ba","Type":"ContainerStarted","Data":"0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef"} Oct 14 07:05:56 crc kubenswrapper[5058]: I1014 07:05:56.491561 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kd74" event={"ID":"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59","Type":"ContainerStarted","Data":"215a9a37d4d6e18d5aa8de66dd691d94290c736de1390cea3886c366d4223a76"} Oct 14 07:05:56 crc kubenswrapper[5058]: I1014 07:05:56.529059 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7kd74" podStartSLOduration=2.577599353 
podStartE2EDuration="5.529040522s" podCreationTimestamp="2025-10-14 07:05:51 +0000 UTC" firstStartedPulling="2025-10-14 07:05:52.612643254 +0000 UTC m=+1100.523727070" lastFinishedPulling="2025-10-14 07:05:55.564084433 +0000 UTC m=+1103.475168239" observedRunningTime="2025-10-14 07:05:56.5237634 +0000 UTC m=+1104.434847226" watchObservedRunningTime="2025-10-14 07:05:56.529040522 +0000 UTC m=+1104.440124318" Oct 14 07:05:57 crc kubenswrapper[5058]: I1014 07:05:57.444956 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 14 07:05:57 crc kubenswrapper[5058]: I1014 07:05:57.852059 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:57 crc kubenswrapper[5058]: I1014 07:05:57.902233 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mm2d\" (UniqueName: \"kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d\") pod \"2a95c82c-56a0-469f-bafc-f7beb19a72ba\" (UID: \"2a95c82c-56a0-469f-bafc-f7beb19a72ba\") " Oct 14 07:05:57 crc kubenswrapper[5058]: I1014 07:05:57.911775 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d" (OuterVolumeSpecName: "kube-api-access-5mm2d") pod "2a95c82c-56a0-469f-bafc-f7beb19a72ba" (UID: "2a95c82c-56a0-469f-bafc-f7beb19a72ba"). InnerVolumeSpecName "kube-api-access-5mm2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:05:58 crc kubenswrapper[5058]: I1014 07:05:58.004260 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mm2d\" (UniqueName: \"kubernetes.io/projected/2a95c82c-56a0-469f-bafc-f7beb19a72ba-kube-api-access-5mm2d\") on node \"crc\" DevicePath \"\"" Oct 14 07:05:58 crc kubenswrapper[5058]: I1014 07:05:58.510566 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fjrhc" event={"ID":"2a95c82c-56a0-469f-bafc-f7beb19a72ba","Type":"ContainerDied","Data":"0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef"} Oct 14 07:05:58 crc kubenswrapper[5058]: I1014 07:05:58.510616 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9129d86059df175be211b16a33e6780f23bd73af265adee1db0d193582adef" Oct 14 07:05:58 crc kubenswrapper[5058]: I1014 07:05:58.510651 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fjrhc" Oct 14 07:05:59 crc kubenswrapper[5058]: I1014 07:05:59.122653 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:05:59 crc kubenswrapper[5058]: E1014 07:05:59.122872 5058 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 07:05:59 crc kubenswrapper[5058]: E1014 07:05:59.124333 5058 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 07:05:59 crc kubenswrapper[5058]: E1014 07:05:59.124400 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift podName:19857bcc-939e-4543-ae17-09d142baebf2 nodeName:}" failed. No retries permitted until 2025-10-14 07:06:07.12437775 +0000 UTC m=+1115.035461566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift") pod "swift-storage-0" (UID: "19857bcc-939e-4543-ae17-09d142baebf2") : configmap "swift-ring-files" not found Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.515053 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.535774 5058 generic.go:334] "Generic (PLEG): container finished" podID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerID="b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20" exitCode=0 Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.535882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerDied","Data":"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20"} Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.540459 5058 generic.go:334] "Generic (PLEG): container finished" podID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerID="3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c" exitCode=0 Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.540523 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerDied","Data":"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c"} Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.614299 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:06:00 crc kubenswrapper[5058]: I1014 07:06:00.614864 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="dnsmasq-dns" containerID="cri-o://22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085" gracePeriod=10 Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.061849 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.168061 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4bg\" (UniqueName: \"kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg\") pod \"fc2af502-68db-46c4-bc9b-b62a50830a61\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.168195 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc\") pod \"fc2af502-68db-46c4-bc9b-b62a50830a61\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.168215 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config\") pod \"fc2af502-68db-46c4-bc9b-b62a50830a61\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.168275 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb\") pod \"fc2af502-68db-46c4-bc9b-b62a50830a61\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.168313 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb\") pod \"fc2af502-68db-46c4-bc9b-b62a50830a61\" (UID: \"fc2af502-68db-46c4-bc9b-b62a50830a61\") " Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.174434 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg" (OuterVolumeSpecName: "kube-api-access-7j4bg") pod "fc2af502-68db-46c4-bc9b-b62a50830a61" (UID: "fc2af502-68db-46c4-bc9b-b62a50830a61"). InnerVolumeSpecName "kube-api-access-7j4bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.204924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc2af502-68db-46c4-bc9b-b62a50830a61" (UID: "fc2af502-68db-46c4-bc9b-b62a50830a61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.212676 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc2af502-68db-46c4-bc9b-b62a50830a61" (UID: "fc2af502-68db-46c4-bc9b-b62a50830a61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.213208 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc2af502-68db-46c4-bc9b-b62a50830a61" (UID: "fc2af502-68db-46c4-bc9b-b62a50830a61"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.216867 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config" (OuterVolumeSpecName: "config") pod "fc2af502-68db-46c4-bc9b-b62a50830a61" (UID: "fc2af502-68db-46c4-bc9b-b62a50830a61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.270622 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.270667 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4bg\" (UniqueName: \"kubernetes.io/projected/fc2af502-68db-46c4-bc9b-b62a50830a61-kube-api-access-7j4bg\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.270683 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.270697 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.270710 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2af502-68db-46c4-bc9b-b62a50830a61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.552190 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerStarted","Data":"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4"} Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.553427 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.554907 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerStarted","Data":"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d"} Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.555358 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.557532 5058 generic.go:334] "Generic (PLEG): container finished" podID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerID="22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085" exitCode=0 Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.557564 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.557581 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" event={"ID":"fc2af502-68db-46c4-bc9b-b62a50830a61","Type":"ContainerDied","Data":"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085"} Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.557914 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d49c4d845-wxq85" event={"ID":"fc2af502-68db-46c4-bc9b-b62a50830a61","Type":"ContainerDied","Data":"ca8ffc943903e457069bc95778926481f055880a23a6398ca072569356b221c8"} Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.557967 5058 scope.go:117] "RemoveContainer" containerID="22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.585453 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.136448167 podStartE2EDuration="57.585430243s" podCreationTimestamp="2025-10-14 07:05:04 +0000 UTC" firstStartedPulling="2025-10-14 07:05:05.89822976 +0000 UTC m=+1053.809313556" lastFinishedPulling="2025-10-14 07:05:26.347211826 +0000 UTC m=+1074.258295632" observedRunningTime="2025-10-14 07:06:01.580335647 +0000 UTC m=+1109.491419473" watchObservedRunningTime="2025-10-14 07:06:01.585430243 +0000 UTC m=+1109.496514069" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.608082 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.388556323 podStartE2EDuration="58.608063311s" podCreationTimestamp="2025-10-14 07:05:03 +0000 UTC" firstStartedPulling="2025-10-14 07:05:05.256550975 +0000 UTC m=+1053.167634781" lastFinishedPulling="2025-10-14 07:05:26.476057963 +0000 UTC m=+1074.387141769" observedRunningTime="2025-10-14 07:06:01.602185163 +0000 UTC m=+1109.513268989" watchObservedRunningTime="2025-10-14 07:06:01.608063311 +0000 UTC m=+1109.519147117" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.650180 5058 scope.go:117] "RemoveContainer" containerID="6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.667736 5058 scope.go:117] "RemoveContainer" containerID="22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085" Oct 14 07:06:01 crc kubenswrapper[5058]: E1014 07:06:01.668210 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085\": container with ID starting with 22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085 not found: ID does not exist" containerID="22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.668238 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085"} err="failed to get container status \"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085\": rpc error: code = NotFound desc = could not find container \"22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085\": container with ID starting with 22192f5800b3e10a8fa3e1341ea2ae33dc35170823be8488db0cb8bf8fab1085 not found: ID does not exist" Oct 14 07:06:01 crc 
kubenswrapper[5058]: I1014 07:06:01.668259 5058 scope.go:117] "RemoveContainer" containerID="6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46" Oct 14 07:06:01 crc kubenswrapper[5058]: E1014 07:06:01.668555 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46\": container with ID starting with 6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46 not found: ID does not exist" containerID="6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.668572 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46"} err="failed to get container status \"6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46\": rpc error: code = NotFound desc = could not find container \"6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46\": container with ID starting with 6b406b9dae8b5bbf2a3badcc15971c4ffd79c8b14fdd66d5fd8c5f636e89aa46 not found: ID does not exist" Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.671925 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:06:01 crc kubenswrapper[5058]: I1014 07:06:01.677207 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d49c4d845-wxq85"] Oct 14 07:06:02 crc kubenswrapper[5058]: I1014 07:06:02.585339 5058 generic.go:334] "Generic (PLEG): container finished" podID="7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" containerID="215a9a37d4d6e18d5aa8de66dd691d94290c736de1390cea3886c366d4223a76" exitCode=0 Oct 14 07:06:02 crc kubenswrapper[5058]: I1014 07:06:02.585461 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kd74" event={"ID":"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59","Type":"ContainerDied","Data":"215a9a37d4d6e18d5aa8de66dd691d94290c736de1390cea3886c366d4223a76"} Oct 14 07:06:02 crc kubenswrapper[5058]: I1014 07:06:02.800705 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" path="/var/lib/kubelet/pods/fc2af502-68db-46c4-bc9b-b62a50830a61/volumes" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.655735 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.656135 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.675894 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3d16-account-create-9vjkw"] Oct 14 07:06:03 crc kubenswrapper[5058]: E1014 07:06:03.676195 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="dnsmasq-dns" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676210 5058 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="dnsmasq-dns" Oct 14 07:06:03 crc kubenswrapper[5058]: E1014 07:06:03.676225 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="init" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676232 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="init" Oct 14 07:06:03 crc kubenswrapper[5058]: E1014 07:06:03.676252 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a95c82c-56a0-469f-bafc-f7beb19a72ba" containerName="mariadb-database-create" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676259 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a95c82c-56a0-469f-bafc-f7beb19a72ba" containerName="mariadb-database-create" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676422 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a95c82c-56a0-469f-bafc-f7beb19a72ba" containerName="mariadb-database-create" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676435 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2af502-68db-46c4-bc9b-b62a50830a61" containerName="dnsmasq-dns" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.676948 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.679407 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.690994 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3d16-account-create-9vjkw"] Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.813818 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qp4x\" (UniqueName: \"kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x\") pod \"glance-3d16-account-create-9vjkw\" (UID: \"0599ed33-679e-405e-a658-1abaf1a957aa\") " pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.916097 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qp4x\" (UniqueName: \"kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x\") pod \"glance-3d16-account-create-9vjkw\" (UID: \"0599ed33-679e-405e-a658-1abaf1a957aa\") " pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:03 crc kubenswrapper[5058]: I1014 07:06:03.957310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qp4x\" (UniqueName: \"kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x\") pod \"glance-3d16-account-create-9vjkw\" (UID: \"0599ed33-679e-405e-a658-1abaf1a957aa\") " pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.018048 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.028934 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.118428 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.118779 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.118898 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7zj\" (UniqueName: \"kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.118918 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.119416 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.119486 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.119507 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices\") pod \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\" (UID: \"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59\") " Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.120177 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.121083 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.121460 5058 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.121471 5058 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.128161 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj" (OuterVolumeSpecName: "kube-api-access-wq7zj") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "kube-api-access-wq7zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.130899 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.145230 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts" (OuterVolumeSpecName: "scripts") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.156725 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.167966 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" (UID: "7a4206c6-4aa9-49a8-b6fb-c88828d6eb59"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.222516 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.222782 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7zj\" (UniqueName: \"kubernetes.io/projected/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-kube-api-access-wq7zj\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.222810 5058 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.222820 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.222830 5058 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.492224 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3d16-account-create-9vjkw"] Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.654224 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d16-account-create-9vjkw" event={"ID":"0599ed33-679e-405e-a658-1abaf1a957aa","Type":"ContainerStarted","Data":"078270d91bea9cf71428743354ee59b1d8fa1ef35926a157089edeb6eb699e29"} Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.654663 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d16-account-create-9vjkw" event={"ID":"0599ed33-679e-405e-a658-1abaf1a957aa","Type":"ContainerStarted","Data":"508eca75be22dfec34345df06ced8e964d54d581bbfc4cbac2f83b48c4baee3b"} Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.656781 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7kd74" event={"ID":"7a4206c6-4aa9-49a8-b6fb-c88828d6eb59","Type":"ContainerDied","Data":"4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8"} Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.656846 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f205f28d6af63dee0198a8201538c603c573b7aa04bbb1f6c14196bc6fc61d8" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.656876 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7kd74" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.688194 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3d16-account-create-9vjkw" podStartSLOduration=1.6881730799999999 podStartE2EDuration="1.68817308s" podCreationTimestamp="2025-10-14 07:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:04.674614972 +0000 UTC m=+1112.585698768" watchObservedRunningTime="2025-10-14 07:06:04.68817308 +0000 UTC m=+1112.599256896" Oct 14 07:06:04 crc kubenswrapper[5058]: I1014 07:06:04.793184 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4gftc" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller" probeResult="failure" output=< Oct 14 07:06:04 crc kubenswrapper[5058]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 07:06:04 crc kubenswrapper[5058]: > Oct 14 07:06:05 crc kubenswrapper[5058]: I1014 07:06:05.666494 5058 generic.go:334] "Generic (PLEG): container finished" podID="0599ed33-679e-405e-a658-1abaf1a957aa" containerID="078270d91bea9cf71428743354ee59b1d8fa1ef35926a157089edeb6eb699e29" exitCode=0 Oct 14 07:06:05 crc kubenswrapper[5058]: I1014 07:06:05.666540 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d16-account-create-9vjkw" event={"ID":"0599ed33-679e-405e-a658-1abaf1a957aa","Type":"ContainerDied","Data":"078270d91bea9cf71428743354ee59b1d8fa1ef35926a157089edeb6eb699e29"} Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.088046 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.173199 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.183243 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"swift-storage-0\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " pod="openstack/swift-storage-0" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.221599 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.274640 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qp4x\" (UniqueName: \"kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x\") pod \"0599ed33-679e-405e-a658-1abaf1a957aa\" (UID: \"0599ed33-679e-405e-a658-1abaf1a957aa\") " Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.281025 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x" (OuterVolumeSpecName: "kube-api-access-7qp4x") pod "0599ed33-679e-405e-a658-1abaf1a957aa" (UID: "0599ed33-679e-405e-a658-1abaf1a957aa"). InnerVolumeSpecName "kube-api-access-7qp4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.376428 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qp4x\" (UniqueName: \"kubernetes.io/projected/0599ed33-679e-405e-a658-1abaf1a957aa-kube-api-access-7qp4x\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.685117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3d16-account-create-9vjkw" event={"ID":"0599ed33-679e-405e-a658-1abaf1a957aa","Type":"ContainerDied","Data":"508eca75be22dfec34345df06ced8e964d54d581bbfc4cbac2f83b48c4baee3b"} Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.685389 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508eca75be22dfec34345df06ced8e964d54d581bbfc4cbac2f83b48c4baee3b" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.685439 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3d16-account-create-9vjkw" Oct 14 07:06:07 crc kubenswrapper[5058]: I1014 07:06:07.798867 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.119055 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fcaa-account-create-hfmrf"] Oct 14 07:06:08 crc kubenswrapper[5058]: E1014 07:06:08.119424 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" containerName="swift-ring-rebalance" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.119437 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" containerName="swift-ring-rebalance" Oct 14 07:06:08 crc kubenswrapper[5058]: E1014 07:06:08.119468 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0599ed33-679e-405e-a658-1abaf1a957aa" containerName="mariadb-account-create" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.119475 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0599ed33-679e-405e-a658-1abaf1a957aa" containerName="mariadb-account-create" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.119653 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" containerName="swift-ring-rebalance" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.119672 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0599ed33-679e-405e-a658-1abaf1a957aa" containerName="mariadb-account-create" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.120240 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fcaa-account-create-hfmrf" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.122811 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.126682 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcaa-account-create-hfmrf"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.291433 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxpb\" (UniqueName: \"kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb\") pod \"keystone-fcaa-account-create-hfmrf\" (UID: \"50f239f4-7737-4333-914b-8586ffd51201\") " pod="openstack/keystone-fcaa-account-create-hfmrf" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.393945 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxpb\" (UniqueName: \"kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb\") pod \"keystone-fcaa-account-create-hfmrf\" (UID: \"50f239f4-7737-4333-914b-8586ffd51201\") " pod="openstack/keystone-fcaa-account-create-hfmrf" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.419017 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxpb\" (UniqueName: \"kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb\") pod \"keystone-fcaa-account-create-hfmrf\" (UID: \"50f239f4-7737-4333-914b-8586ffd51201\") " pod="openstack/keystone-fcaa-account-create-hfmrf" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.428918 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-202a-account-create-4v5vg"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.429984 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-202a-account-create-4v5vg" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.432503 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.438337 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fcaa-account-create-hfmrf" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.444688 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-202a-account-create-4v5vg"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.596866 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjsp\" (UniqueName: \"kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp\") pod \"placement-202a-account-create-4v5vg\" (UID: \"93f4baf6-bf38-46e3-ac23-e4e7e8083f29\") " pod="openstack/placement-202a-account-create-4v5vg" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.697687 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"855a298293805aa38171276606596651b3f61ee18614f5c31ef42a881c329e46"} Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.698594 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sjsp\" (UniqueName: \"kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp\") pod \"placement-202a-account-create-4v5vg\" (UID: \"93f4baf6-bf38-46e3-ac23-e4e7e8083f29\") " pod="openstack/placement-202a-account-create-4v5vg" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.732601 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sjsp\" (UniqueName: \"kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp\") pod \"placement-202a-account-create-4v5vg\" (UID: \"93f4baf6-bf38-46e3-ac23-e4e7e8083f29\") " pod="openstack/placement-202a-account-create-4v5vg" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.819019 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-f2gf4"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.820233 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f2gf4" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.823929 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m27gd" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.824026 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.830472 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f2gf4"] Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.848335 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-202a-account-create-4v5vg"
Oct 14 07:06:08 crc kubenswrapper[5058]: I1014 07:06:08.896844 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fcaa-account-create-hfmrf"]
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.002550 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.002606 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.002638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.002750 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tjb\" (UniqueName: \"kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.103997 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.104068 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.104109 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.104156 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tjb\" (UniqueName: \"kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.121562 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.121776 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.122438 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tjb\" (UniqueName: \"kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.125525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle\") pod \"glance-db-sync-f2gf4\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.136440 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f2gf4"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.317094 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-202a-account-create-4v5vg"]
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.705141 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"06b49fc229b22caf4d176c8d244f32c702aa60ee0c1bc3f0ee68302b65265d19"}
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.706991 5058 generic.go:334] "Generic (PLEG): container finished" podID="50f239f4-7737-4333-914b-8586ffd51201" containerID="2f8dbfbad7c8bbb9c4315a71e9431f0677a4df9a5188f3ff63814f8bf788017e" exitCode=0
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.707051 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcaa-account-create-hfmrf" event={"ID":"50f239f4-7737-4333-914b-8586ffd51201","Type":"ContainerDied","Data":"2f8dbfbad7c8bbb9c4315a71e9431f0677a4df9a5188f3ff63814f8bf788017e"}
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.707078 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcaa-account-create-hfmrf" event={"ID":"50f239f4-7737-4333-914b-8586ffd51201","Type":"ContainerStarted","Data":"8be10c63ac361f9134bed57ac3c8e829eb0f3ccbb8532c1e6dfde327d80ea179"}
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.708379 5058 generic.go:334] "Generic (PLEG): container finished" podID="93f4baf6-bf38-46e3-ac23-e4e7e8083f29" containerID="e486cd47d1cab01f6ead3b605ca5d0c12f1f0966f60d7d5f68b287aaa532a10c" exitCode=0
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.708410 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-202a-account-create-4v5vg" event={"ID":"93f4baf6-bf38-46e3-ac23-e4e7e8083f29","Type":"ContainerDied","Data":"e486cd47d1cab01f6ead3b605ca5d0c12f1f0966f60d7d5f68b287aaa532a10c"}
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.708427 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-202a-account-create-4v5vg" event={"ID":"93f4baf6-bf38-46e3-ac23-e4e7e8083f29","Type":"ContainerStarted","Data":"ed0c66c535c62ea802a1de5d4c5e1483dddd20f62c477758d72e0e99fa47e717"}
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.790062 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4gftc" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller" probeResult="failure" output=<
Oct 14 07:06:09 crc kubenswrapper[5058]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 14 07:06:09 crc kubenswrapper[5058]: >
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.850312 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.850381 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wskz9"
Oct 14 07:06:09 crc kubenswrapper[5058]: I1014 07:06:09.946746 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f2gf4"]
Oct 14 07:06:09 crc kubenswrapper[5058]: W1014 07:06:09.956288 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3232b896_82b9_4d80_b74a_2e860e05ffbc.slice/crio-69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765 WatchSource:0}: Error finding container 69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765: Status 404 returned error can't find the container with id 69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.045988 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4gftc-config-xwvkm"]
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.046997 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.048852 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.057181 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc-config-xwvkm"]
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.223852 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.223958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqp8m\" (UniqueName: \"kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.224017 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.224176 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.224290 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.224402 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.325864 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.325947 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqp8m\" (UniqueName: \"kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.325979 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326010 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326072 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326145 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326152 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326899 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.326975 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.327755 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.328553 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.364210 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqp8m\" (UniqueName: \"kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m\") pod \"ovn-controller-4gftc-config-xwvkm\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") " pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.396116 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.720369 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"54425f7ca236aef845e21698b62205fe14abb46d0ffcc6865557173aab0f934a"}
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.720625 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"a3e3288d52021b1c5dee76e985c4bf2ef23c6520d23e5ef4eb02fe977bc949b4"}
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.720634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"6d0292a646c2aabd8315d14682a37635c0746d89b568115520d8211b6b5b0c40"}
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.721944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f2gf4" event={"ID":"3232b896-82b9-4d80-b74a-2e860e05ffbc","Type":"ContainerStarted","Data":"69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765"}
Oct 14 07:06:10 crc kubenswrapper[5058]: I1014 07:06:10.903756 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc-config-xwvkm"]
Oct 14 07:06:10 crc kubenswrapper[5058]: W1014 07:06:10.926741 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebd5fe45_85d7_4b77_867e_7ded975daab1.slice/crio-1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23 WatchSource:0}: Error finding container 1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23: Status 404 returned error can't find the container with id 1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.012920 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcaa-account-create-hfmrf"
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.147698 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxpb\" (UniqueName: \"kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb\") pod \"50f239f4-7737-4333-914b-8586ffd51201\" (UID: \"50f239f4-7737-4333-914b-8586ffd51201\") "
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.158067 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb" (OuterVolumeSpecName: "kube-api-access-jpxpb") pod "50f239f4-7737-4333-914b-8586ffd51201" (UID: "50f239f4-7737-4333-914b-8586ffd51201"). InnerVolumeSpecName "kube-api-access-jpxpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.211594 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-202a-account-create-4v5vg"
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.253104 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxpb\" (UniqueName: \"kubernetes.io/projected/50f239f4-7737-4333-914b-8586ffd51201-kube-api-access-jpxpb\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.354752 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sjsp\" (UniqueName: \"kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp\") pod \"93f4baf6-bf38-46e3-ac23-e4e7e8083f29\" (UID: \"93f4baf6-bf38-46e3-ac23-e4e7e8083f29\") "
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.361550 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp" (OuterVolumeSpecName: "kube-api-access-6sjsp") pod "93f4baf6-bf38-46e3-ac23-e4e7e8083f29" (UID: "93f4baf6-bf38-46e3-ac23-e4e7e8083f29"). InnerVolumeSpecName "kube-api-access-6sjsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.456988 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sjsp\" (UniqueName: \"kubernetes.io/projected/93f4baf6-bf38-46e3-ac23-e4e7e8083f29-kube-api-access-6sjsp\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.733387 5058 generic.go:334] "Generic (PLEG): container finished" podID="ebd5fe45-85d7-4b77-867e-7ded975daab1" containerID="be1f3befeb6a58f525d18137fbcd6f385642563af1087f8abe45a5caa94ac3e4" exitCode=0
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.733513 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-xwvkm" event={"ID":"ebd5fe45-85d7-4b77-867e-7ded975daab1","Type":"ContainerDied","Data":"be1f3befeb6a58f525d18137fbcd6f385642563af1087f8abe45a5caa94ac3e4"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.733559 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-xwvkm" event={"ID":"ebd5fe45-85d7-4b77-867e-7ded975daab1","Type":"ContainerStarted","Data":"1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.736751 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-202a-account-create-4v5vg" event={"ID":"93f4baf6-bf38-46e3-ac23-e4e7e8083f29","Type":"ContainerDied","Data":"ed0c66c535c62ea802a1de5d4c5e1483dddd20f62c477758d72e0e99fa47e717"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.736814 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0c66c535c62ea802a1de5d4c5e1483dddd20f62c477758d72e0e99fa47e717"
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.736883 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-202a-account-create-4v5vg"
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.740868 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"431d79aac23d72bdbe76355998d8a1a48b5be7faa4f9c5fcb39335a05f1f4018"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.741032 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"1597a4004ccd7362f11728ae0a39556e442ff02bd29578ce9c6c13e045eb6f2a"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.742029 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fcaa-account-create-hfmrf" event={"ID":"50f239f4-7737-4333-914b-8586ffd51201","Type":"ContainerDied","Data":"8be10c63ac361f9134bed57ac3c8e829eb0f3ccbb8532c1e6dfde327d80ea179"}
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.742116 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be10c63ac361f9134bed57ac3c8e829eb0f3ccbb8532c1e6dfde327d80ea179"
Oct 14 07:06:11 crc kubenswrapper[5058]: I1014 07:06:11.742210 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fcaa-account-create-hfmrf"
Oct 14 07:06:12 crc kubenswrapper[5058]: I1014 07:06:12.760538 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"96660315f94d74277f9b50658993503a1b43f9bef9a95f56461f122ccec07ad5"}
Oct 14 07:06:12 crc kubenswrapper[5058]: I1014 07:06:12.761127 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"537911dbd262660ec60f6e9b0405d1a9f83ab10683dad7e3074f39dc6c1a11e4"}
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.111526 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290479 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290752 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290745 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290791 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290835 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqp8m\" (UniqueName: \"kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290910 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.290950 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts\") pod \"ebd5fe45-85d7-4b77-867e-7ded975daab1\" (UID: \"ebd5fe45-85d7-4b77-867e-7ded975daab1\") "
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.291308 5058 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run-ovn\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.291859 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.291922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run" (OuterVolumeSpecName: "var-run") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.292115 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts" (OuterVolumeSpecName: "scripts") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.292337 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.297858 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m" (OuterVolumeSpecName: "kube-api-access-zqp8m") pod "ebd5fe45-85d7-4b77-867e-7ded975daab1" (UID: "ebd5fe45-85d7-4b77-867e-7ded975daab1"). InnerVolumeSpecName "kube-api-access-zqp8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.398567 5058 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-log-ovn\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.398641 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqp8m\" (UniqueName: \"kubernetes.io/projected/ebd5fe45-85d7-4b77-867e-7ded975daab1-kube-api-access-zqp8m\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.398663 5058 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebd5fe45-85d7-4b77-867e-7ded975daab1-var-run\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.398759 5058 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-additional-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.398779 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebd5fe45-85d7-4b77-867e-7ded975daab1-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.772630 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-xwvkm" event={"ID":"ebd5fe45-85d7-4b77-867e-7ded975daab1","Type":"ContainerDied","Data":"1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23"}
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.772665 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fffb19b3f35a6ad240b5fb36202e8ad022d1b6f406f2a61871dc7edfe85cd23"
Oct 14 07:06:13 crc kubenswrapper[5058]: I1014 07:06:13.772683 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-xwvkm"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.212206 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gftc-config-xwvkm"]
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.220083 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4gftc-config-xwvkm"]
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305367 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4gftc-config-9vkn4"]
Oct 14 07:06:14 crc kubenswrapper[5058]: E1014 07:06:14.305688 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f239f4-7737-4333-914b-8586ffd51201" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305706 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f239f4-7737-4333-914b-8586ffd51201" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: E1014 07:06:14.305733 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd5fe45-85d7-4b77-867e-7ded975daab1" containerName="ovn-config"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305740 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd5fe45-85d7-4b77-867e-7ded975daab1" containerName="ovn-config"
Oct 14 07:06:14 crc kubenswrapper[5058]: E1014 07:06:14.305751 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f4baf6-bf38-46e3-ac23-e4e7e8083f29" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305757 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f4baf6-bf38-46e3-ac23-e4e7e8083f29" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305944 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f239f4-7737-4333-914b-8586ffd51201" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305973 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f4baf6-bf38-46e3-ac23-e4e7e8083f29" containerName="mariadb-account-create"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.305991 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd5fe45-85d7-4b77-867e-7ded975daab1" containerName="ovn-config"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.306482 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.308748 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.358810 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.358898 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.358923 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6bh\" (UniqueName: \"kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.358977 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.358999 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.359036 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.361596 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc-config-9vkn4"]
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.461715 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.461853 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.461882 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb6bh\" (UniqueName: \"kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.461941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.461967 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.462004 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.462042 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.462167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.462206 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.462526 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.463770 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.485465 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb6bh\" (UniqueName: \"kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh\") pod \"ovn-controller-4gftc-config-9vkn4\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.655514 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-9vkn4"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.744598 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.860328 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd5fe45-85d7-4b77-867e-7ded975daab1" path="/var/lib/kubelet/pods/ebd5fe45-85d7-4b77-867e-7ded975daab1/volumes"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.861224 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"33d2eddcb88d038d345ef3bf102f653431e0f85ff697aaaf69ad45de80bb0ebc"}
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.861283 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4gftc"
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.861293 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"509b41a491460621f4e48c030ec3096e244144015e697f031ef0d4a24b2efb95"}
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.861302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"6e399dec260edbe3841b098d9b9f1bf0dfe804d3a5aa01facc0ec4b2d640b130"}
Oct 14 07:06:14 crc kubenswrapper[5058]: I1014 07:06:14.861312 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"343434d65ccc14910a7fcbead2fb88c5672e280565333bc8a1847ece2eded17c"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.097533 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ds9g9"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.103597 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ds9g9"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.110732 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ds9g9"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.207062 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rtt7h"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.208302 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rtt7h"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.221601 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rtt7h"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.246573 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4gftc-config-9vkn4"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.280764 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m8gv\" (UniqueName: \"kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv\") pod \"cinder-db-create-ds9g9\" (UID: \"2c5df098-55c4-4c1a-8872-883e1f9c3929\") " pod="openstack/cinder-db-create-ds9g9"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.366972 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.382035 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m8gv\" (UniqueName: \"kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv\") pod \"cinder-db-create-ds9g9\" (UID: \"2c5df098-55c4-4c1a-8872-883e1f9c3929\") " pod="openstack/cinder-db-create-ds9g9"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.382091 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6hb\" (UniqueName: \"kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb\") pod \"barbican-db-create-rtt7h\" (UID: \"87d4c8ac-b4bc-440a-bb9d-3721475b9fae\") " pod="openstack/barbican-db-create-rtt7h"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.404827 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m8gv\" (UniqueName: \"kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv\") pod \"cinder-db-create-ds9g9\" (UID: \"2c5df098-55c4-4c1a-8872-883e1f9c3929\") " pod="openstack/cinder-db-create-ds9g9"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.431053 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ds9g9"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.469746 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dfvvs"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.476530 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.480689 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.480772 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xv7zn"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.480855 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.481515 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.485842 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dfvvs"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.490328 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6hb\" (UniqueName: \"kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb\") pod \"barbican-db-create-rtt7h\" (UID: \"87d4c8ac-b4bc-440a-bb9d-3721475b9fae\") " pod="openstack/barbican-db-create-rtt7h"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.529394 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6hb\" (UniqueName: \"kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb\") pod \"barbican-db-create-rtt7h\" (UID: \"87d4c8ac-b4bc-440a-bb9d-3721475b9fae\") " pod="openstack/barbican-db-create-rtt7h"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.536307 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qmtsb"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.537282 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qmtsb"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.541659 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qmtsb"]
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.591414 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.591636 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rq4\" (UniqueName: \"kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.591668 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.693545 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px66q\" (UniqueName: \"kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q\") pod \"neutron-db-create-qmtsb\" (UID: \"bab845f3-015e-4520-b4cf-56394290c253\") " pod="openstack/neutron-db-create-qmtsb"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.693629 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.693655 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rq4\" (UniqueName: \"kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.693687 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.700192 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.705964 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.708319 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rq4\" (UniqueName: \"kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4\") pod \"keystone-db-sync-dfvvs\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.795701 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px66q\" (UniqueName: \"kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q\") pod \"neutron-db-create-qmtsb\" (UID: \"bab845f3-015e-4520-b4cf-56394290c253\") " pod="openstack/neutron-db-create-qmtsb"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.812717 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px66q\" (UniqueName: \"kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q\") pod \"neutron-db-create-qmtsb\" (UID: \"bab845f3-015e-4520-b4cf-56394290c253\") " pod="openstack/neutron-db-create-qmtsb"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.823524 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rtt7h"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.835893 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dfvvs"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.840733 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"f3d2c8c9184a355d31a5639a1537fe51d6e252644fd1b6581a47121a0d587325"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.840772 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"f29643cf82745f7e73e78c4135a5914e8cf5724cd7128107573a6ee8be46e9b0"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.840782 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerStarted","Data":"3f71ae0f3608a3c488d1248099617a68fad24788a23a6eb403537e7c705bc211"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.844216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-9vkn4" event={"ID":"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502","Type":"ContainerStarted","Data":"0f272645c88a90b733a331db5ccb33b7b4626945cf20138c653e291f8cc2f38d"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.844239 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-9vkn4" event={"ID":"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502","Type":"ContainerStarted","Data":"d93a36aea17103c4d6280811d9d36314cd70236949a72dafa8f37377e3c43176"}
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.863266 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qmtsb"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.906692 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.944526186 podStartE2EDuration="25.906659161s" podCreationTimestamp="2025-10-14 07:05:50 +0000 UTC" firstStartedPulling="2025-10-14 07:06:07.804895596 +0000 UTC m=+1115.715979402" lastFinishedPulling="2025-10-14 07:06:13.767028571 +0000 UTC m=+1121.678112377" observedRunningTime="2025-10-14 07:06:15.896293994 +0000 UTC m=+1123.807377820" watchObservedRunningTime="2025-10-14 07:06:15.906659161 +0000 UTC m=+1123.817742967"
Oct 14 07:06:15 crc kubenswrapper[5058]: I1014 07:06:15.945832 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ds9g9"]
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.198523 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"]
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.207339 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.210493 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.217878 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"]
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.307690 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.307755 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.307912 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.308102 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.308182 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.308285 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jxx\" (UniqueName: \"kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.329134 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rtt7h"]
Oct 14 07:06:16 crc kubenswrapper[5058]: W1014 07:06:16.339550 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87d4c8ac_b4bc_440a_bb9d_3721475b9fae.slice/crio-a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9 WatchSource:0}: Error finding container a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9: Status 404 returned error can't find the container with id a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410231 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410288 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410318 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410354 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9jxx\" (UniqueName: \"kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410394 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.410414 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.411175 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.411195 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.411425 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.411767 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.411971 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.428641 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9jxx\" (UniqueName: \"kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx\") pod \"dnsmasq-dns-d9ddcb47c-s7r8b\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.470917 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dfvvs"]
Oct 14 07:06:16 crc kubenswrapper[5058]: W1014 07:06:16.478860 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ff99534_a94e_4306_ae3b_2381a0b8d029.slice/crio-fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280 WatchSource:0}: Error finding container fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280: Status 404 returned error can't find the container with id fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.501738 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qmtsb"]
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.527543 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b"
Oct 14 07:06:16 crc kubenswrapper[5058]: W1014 07:06:16.534158 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbab845f3_015e_4520_b4cf_56394290c253.slice/crio-86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61 WatchSource:0}: Error finding container 86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61: Status 404 returned error can't find the container with id 86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.779911 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"]
Oct 14 07:06:16 crc kubenswrapper[5058]: W1014 07:06:16.810046 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf8ea136_fa18_4c7c_92cf_67f1ae3c55a9.slice/crio-d44374347fede8a77c6dcecb0a1de3b895c837600114d0954d7b423f09480169 WatchSource:0}: Error finding container d44374347fede8a77c6dcecb0a1de3b895c837600114d0954d7b423f09480169: Status 404 returned error can't find the container with id d44374347fede8a77c6dcecb0a1de3b895c837600114d0954d7b423f09480169
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.856616 5058 generic.go:334] "Generic (PLEG): container finished" podID="2c5df098-55c4-4c1a-8872-883e1f9c3929" containerID="52e19a73054745e7a98abc9f1aa91d3786a1f937f02d219f7e8db46c396014d4" exitCode=0
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.856691 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ds9g9" event={"ID":"2c5df098-55c4-4c1a-8872-883e1f9c3929","Type":"ContainerDied","Data":"52e19a73054745e7a98abc9f1aa91d3786a1f937f02d219f7e8db46c396014d4"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.856717 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ds9g9" event={"ID":"2c5df098-55c4-4c1a-8872-883e1f9c3929","Type":"ContainerStarted","Data":"240c5bb798120880ae6daad72d1cfb65483a75c8d48f177992161ee92e97acc9"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.859208 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" event={"ID":"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9","Type":"ContainerStarted","Data":"d44374347fede8a77c6dcecb0a1de3b895c837600114d0954d7b423f09480169"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.866879 5058 generic.go:334] "Generic (PLEG): container finished" podID="87d4c8ac-b4bc-440a-bb9d-3721475b9fae" containerID="41500401a618a5fb16403d88a375426e53aded9cff72d89c8c0891e490e3bef3" exitCode=0
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.867056 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rtt7h" event={"ID":"87d4c8ac-b4bc-440a-bb9d-3721475b9fae","Type":"ContainerDied","Data":"41500401a618a5fb16403d88a375426e53aded9cff72d89c8c0891e490e3bef3"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.867083 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rtt7h" event={"ID":"87d4c8ac-b4bc-440a-bb9d-3721475b9fae","Type":"ContainerStarted","Data":"a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.878184 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dfvvs" event={"ID":"7ff99534-a94e-4306-ae3b-2381a0b8d029","Type":"ContainerStarted","Data":"fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.883773 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-9vkn4" event={"ID":"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502","Type":"ContainerDied","Data":"0f272645c88a90b733a331db5ccb33b7b4626945cf20138c653e291f8cc2f38d"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.883845 5058 generic.go:334] "Generic (PLEG): container finished" podID="fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" containerID="0f272645c88a90b733a331db5ccb33b7b4626945cf20138c653e291f8cc2f38d" exitCode=0
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.888664 5058 generic.go:334] "Generic (PLEG): container finished" podID="bab845f3-015e-4520-b4cf-56394290c253" containerID="6c1b6b6947fbcea5df829f8242ad5d14a9bbdb7a7102295008036d5e74029b73" exitCode=0
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.890199 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qmtsb" event={"ID":"bab845f3-015e-4520-b4cf-56394290c253","Type":"ContainerDied","Data":"6c1b6b6947fbcea5df829f8242ad5d14a9bbdb7a7102295008036d5e74029b73"}
Oct 14 07:06:16 crc kubenswrapper[5058]: I1014 07:06:16.890228 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qmtsb" event={"ID":"bab845f3-015e-4520-b4cf-56394290c253","Type":"ContainerStarted","Data":"86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61"}
Oct 14 07:06:17 crc kubenswrapper[5058]: I1014 07:06:17.902385 5058 generic.go:334] "Generic (PLEG): container finished" podID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerID="c0afe83127c3ff80e32efae5fbf2586316b4e6f66da08ad70f4139a81db5b62b" exitCode=0
Oct 14 07:06:17 crc kubenswrapper[5058]: I1014 07:06:17.903312 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" event={"ID":"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9","Type":"ContainerDied","Data":"c0afe83127c3ff80e32efae5fbf2586316b4e6f66da08ad70f4139a81db5b62b"}
Oct 14 07:06:28 crc kubenswrapper[5058]: E1014 07:06:28.854825 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:839f0e593dd6b59e385ec9471f4eeaa34f1c539268588114cbc34cc9a6117835"
Oct 14 07:06:28 crc kubenswrapper[5058]: E1014 07:06:28.856523 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:839f0e593dd6b59e385ec9471f4eeaa34f1c539268588114cbc34cc9a6117835,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9tjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-f2gf4_openstack(3232b896-82b9-4d80-b74a-2e860e05ffbc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:06:28 crc kubenswrapper[5058]: E1014 07:06:28.857764 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-f2gf4" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" Oct 14 07:06:28 crc kubenswrapper[5058]: I1014 07:06:28.927298 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-9vkn4" Oct 14 07:06:28 crc kubenswrapper[5058]: I1014 07:06:28.933733 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ds9g9" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.016149 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ds9g9" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.016181 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ds9g9" event={"ID":"2c5df098-55c4-4c1a-8872-883e1f9c3929","Type":"ContainerDied","Data":"240c5bb798120880ae6daad72d1cfb65483a75c8d48f177992161ee92e97acc9"} Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.016222 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240c5bb798120880ae6daad72d1cfb65483a75c8d48f177992161ee92e97acc9" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.018971 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc-config-9vkn4" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.019121 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc-config-9vkn4" event={"ID":"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502","Type":"ContainerDied","Data":"d93a36aea17103c4d6280811d9d36314cd70236949a72dafa8f37377e3c43176"} Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.019150 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93a36aea17103c4d6280811d9d36314cd70236949a72dafa8f37377e3c43176" Oct 14 07:06:29 crc kubenswrapper[5058]: E1014 07:06:29.019995 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:839f0e593dd6b59e385ec9471f4eeaa34f1c539268588114cbc34cc9a6117835\\\"\"" pod="openstack/glance-db-sync-f2gf4" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057001 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057084 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m8gv\" (UniqueName: \"kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv\") pod \"2c5df098-55c4-4c1a-8872-883e1f9c3929\" (UID: \"2c5df098-55c4-4c1a-8872-883e1f9c3929\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057081 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057104 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057145 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057244 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057283 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb6bh\" (UniqueName: \"kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057352 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run\") pod \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\" (UID: \"fa893d2c-eb2f-4e9f-aba8-4767b1c9d502\") " Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057600 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run" (OuterVolumeSpecName: "var-run") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057888 5058 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.058022 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.057231 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.058235 5058 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.058457 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts" (OuterVolumeSpecName: "scripts") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.061618 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv" (OuterVolumeSpecName: "kube-api-access-7m8gv") pod "2c5df098-55c4-4c1a-8872-883e1f9c3929" (UID: "2c5df098-55c4-4c1a-8872-883e1f9c3929"). InnerVolumeSpecName "kube-api-access-7m8gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.066201 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh" (OuterVolumeSpecName: "kube-api-access-nb6bh") pod "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" (UID: "fa893d2c-eb2f-4e9f-aba8-4767b1c9d502"). InnerVolumeSpecName "kube-api-access-nb6bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.160342 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m8gv\" (UniqueName: \"kubernetes.io/projected/2c5df098-55c4-4c1a-8872-883e1f9c3929-kube-api-access-7m8gv\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.160382 5058 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.160395 5058 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.160408 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:29 crc kubenswrapper[5058]: I1014 07:06:29.160420 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb6bh\" (UniqueName: \"kubernetes.io/projected/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502-kube-api-access-nb6bh\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:30 crc kubenswrapper[5058]: I1014 07:06:30.003015 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gftc-config-9vkn4"] Oct 14 07:06:30 crc kubenswrapper[5058]: I1014 07:06:30.011588 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4gftc-config-9vkn4"] Oct 14 07:06:30 crc kubenswrapper[5058]: I1014 07:06:30.809028 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" 
path="/var/lib/kubelet/pods/fa893d2c-eb2f-4e9f-aba8-4767b1c9d502/volumes" Oct 14 07:06:31 crc kubenswrapper[5058]: E1014 07:06:31.547918 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone@sha256:f7302eb8964de699cf44da13958a8ce3c1c4c05406a6fc58b6cdcb1706b8f439" Oct 14 07:06:31 crc kubenswrapper[5058]: E1014 07:06:31.548423 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone@sha256:f7302eb8964de699cf44da13958a8ce3c1c4c05406a6fc58b6cdcb1706b8f439,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9rq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-dfvvs_openstack(7ff99534-a94e-4306-ae3b-2381a0b8d029): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:06:31 crc kubenswrapper[5058]: E1014 07:06:31.550113 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-dfvvs" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.699216 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rtt7h" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.705312 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qmtsb" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.807581 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6hb\" (UniqueName: \"kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb\") pod \"87d4c8ac-b4bc-440a-bb9d-3721475b9fae\" (UID: \"87d4c8ac-b4bc-440a-bb9d-3721475b9fae\") " Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.807731 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px66q\" (UniqueName: \"kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q\") pod \"bab845f3-015e-4520-b4cf-56394290c253\" (UID: \"bab845f3-015e-4520-b4cf-56394290c253\") " Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.813005 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb" (OuterVolumeSpecName: "kube-api-access-dz6hb") pod "87d4c8ac-b4bc-440a-bb9d-3721475b9fae" (UID: "87d4c8ac-b4bc-440a-bb9d-3721475b9fae"). InnerVolumeSpecName "kube-api-access-dz6hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.813048 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q" (OuterVolumeSpecName: "kube-api-access-px66q") pod "bab845f3-015e-4520-b4cf-56394290c253" (UID: "bab845f3-015e-4520-b4cf-56394290c253"). InnerVolumeSpecName "kube-api-access-px66q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.910030 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6hb\" (UniqueName: \"kubernetes.io/projected/87d4c8ac-b4bc-440a-bb9d-3721475b9fae-kube-api-access-dz6hb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:31 crc kubenswrapper[5058]: I1014 07:06:31.910068 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px66q\" (UniqueName: \"kubernetes.io/projected/bab845f3-015e-4520-b4cf-56394290c253-kube-api-access-px66q\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.048862 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" event={"ID":"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9","Type":"ContainerStarted","Data":"31884091c1de1ca3725eda2705641674af18bc20436040f8d5545760d1d2cdd3"} Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.048976 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.050439 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rtt7h" event={"ID":"87d4c8ac-b4bc-440a-bb9d-3721475b9fae","Type":"ContainerDied","Data":"a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9"} Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.050573 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68fb2aa8c1ac0a99f02f71906b9cd87315fecaf6ec2f0e22619249ecadf53f9" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.050455 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rtt7h" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.052360 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qmtsb" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.052416 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qmtsb" event={"ID":"bab845f3-015e-4520-b4cf-56394290c253","Type":"ContainerDied","Data":"86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61"} Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.052501 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d395e57ec1d5e434544a78d6593cf266ee110c85b60db4d9ad86cc977b1b61" Oct 14 07:06:32 crc kubenswrapper[5058]: E1014 07:06:32.053256 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone@sha256:f7302eb8964de699cf44da13958a8ce3c1c4c05406a6fc58b6cdcb1706b8f439\\\"\"" pod="openstack/keystone-db-sync-dfvvs" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" Oct 14 07:06:32 crc kubenswrapper[5058]: I1014 07:06:32.072205 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" podStartSLOduration=16.072188926 podStartE2EDuration="16.072188926s" podCreationTimestamp="2025-10-14 07:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:32.068369666 +0000 UTC m=+1139.979453542" watchObservedRunningTime="2025-10-14 07:06:32.072188926 +0000 UTC m=+1139.983272732" Oct 14 07:06:33 crc kubenswrapper[5058]: I1014 07:06:33.656497 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:06:33 crc kubenswrapper[5058]: I1014 07:06:33.656917 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:06:33 crc kubenswrapper[5058]: I1014 07:06:33.656990 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:06:33 crc kubenswrapper[5058]: I1014 07:06:33.657985 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:06:33 crc kubenswrapper[5058]: I1014 07:06:33.658066 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" 
containerID="cri-o://11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778" gracePeriod=600 Oct 14 07:06:34 crc kubenswrapper[5058]: I1014 07:06:34.080317 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778" exitCode=0 Oct 14 07:06:34 crc kubenswrapper[5058]: I1014 07:06:34.080413 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778"} Oct 14 07:06:34 crc kubenswrapper[5058]: I1014 07:06:34.080679 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc"} Oct 14 07:06:34 crc kubenswrapper[5058]: I1014 07:06:34.080721 5058 scope.go:117] "RemoveContainer" containerID="a65ec9f6646d74e8b5c645d5e10f222dfaf2fbd7a5ae9280134ea9c9569464ad" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.183013 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0800-account-create-k7fx9"] Oct 14 07:06:35 crc kubenswrapper[5058]: E1014 07:06:35.183700 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5df098-55c4-4c1a-8872-883e1f9c3929" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.183715 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5df098-55c4-4c1a-8872-883e1f9c3929" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: E1014 07:06:35.183746 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" containerName="ovn-config" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.183754 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" containerName="ovn-config" Oct 14 07:06:35 crc kubenswrapper[5058]: E1014 07:06:35.183766 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab845f3-015e-4520-b4cf-56394290c253" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.183774 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab845f3-015e-4520-b4cf-56394290c253" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: E1014 07:06:35.183826 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d4c8ac-b4bc-440a-bb9d-3721475b9fae" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.183840 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d4c8ac-b4bc-440a-bb9d-3721475b9fae" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.184100 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa893d2c-eb2f-4e9f-aba8-4767b1c9d502" containerName="ovn-config" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.184115 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d4c8ac-b4bc-440a-bb9d-3721475b9fae" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.184128 5058 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bab845f3-015e-4520-b4cf-56394290c253" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.184157 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5df098-55c4-4c1a-8872-883e1f9c3929" containerName="mariadb-database-create" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.184785 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.187776 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.217977 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0800-account-create-k7fx9"] Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.267602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmwl\" (UniqueName: \"kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl\") pod \"cinder-0800-account-create-k7fx9\" (UID: \"5111efb2-a6ab-4523-a72d-132bf5d965b5\") " pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.369618 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmwl\" (UniqueName: \"kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl\") pod \"cinder-0800-account-create-k7fx9\" (UID: \"5111efb2-a6ab-4523-a72d-132bf5d965b5\") " pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.391077 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmwl\" (UniqueName: \"kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl\") pod \"cinder-0800-account-create-k7fx9\" (UID: \"5111efb2-a6ab-4523-a72d-132bf5d965b5\") " pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:35 crc kubenswrapper[5058]: I1014 07:06:35.520409 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:36 crc kubenswrapper[5058]: I1014 07:06:36.069019 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0800-account-create-k7fx9"] Oct 14 07:06:36 crc kubenswrapper[5058]: I1014 07:06:36.107918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0800-account-create-k7fx9" event={"ID":"5111efb2-a6ab-4523-a72d-132bf5d965b5","Type":"ContainerStarted","Data":"22d412c7f47d2e3246b8fe89013c627a9b35ab57d181ce7ab40cbc261fa4fa27"} Oct 14 07:06:36 crc kubenswrapper[5058]: I1014 07:06:36.529023 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" Oct 14 07:06:36 crc kubenswrapper[5058]: I1014 07:06:36.620209 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:06:36 crc kubenswrapper[5058]: I1014 07:06:36.620593 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="dnsmasq-dns" containerID="cri-o://b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98" gracePeriod=10 Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.067783 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.116616 5058 generic.go:334] "Generic (PLEG): container finished" podID="5111efb2-a6ab-4523-a72d-132bf5d965b5" containerID="3bf89c61c5eddf7a8e956e671d562dd7749f03ce83b3c01ad37a890b11967d16" exitCode=0 Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.116655 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0800-account-create-k7fx9" event={"ID":"5111efb2-a6ab-4523-a72d-132bf5d965b5","Type":"ContainerDied","Data":"3bf89c61c5eddf7a8e956e671d562dd7749f03ce83b3c01ad37a890b11967d16"} Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.118943 5058 generic.go:334] "Generic (PLEG): container finished" podID="05a58868-3565-4a25-b847-9422039abe76" containerID="b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98" exitCode=0 Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.118979 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" event={"ID":"05a58868-3565-4a25-b847-9422039abe76","Type":"ContainerDied","Data":"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98"} Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.119004 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" event={"ID":"05a58868-3565-4a25-b847-9422039abe76","Type":"ContainerDied","Data":"ebc45d9d98c95e9e7af8ad8f632175915a662310074be27c1b33a0325680a785"} Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.119025 5058 scope.go:117] "RemoveContainer" containerID="b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.119032 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c757dd68f-q9w6l" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.146100 5058 scope.go:117] "RemoveContainer" containerID="ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.161849 5058 scope.go:117] "RemoveContainer" containerID="b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98" Oct 14 07:06:37 crc kubenswrapper[5058]: E1014 07:06:37.162246 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98\": container with ID starting with b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98 not found: ID does not exist" containerID="b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.162281 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98"} err="failed to get container status \"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98\": rpc error: code = NotFound desc = could not find container \"b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98\": container with ID starting with b2944607d7ca0ea8b90aba813c01ad38656c6323c3c981538aa0da01af13ff98 not found: ID does not exist" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.162305 5058 scope.go:117] "RemoveContainer" containerID="ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1" Oct 14 07:06:37 crc kubenswrapper[5058]: E1014 07:06:37.162556 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1\": container with ID starting with ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1 not found: ID does not exist" containerID="ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.162626 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1"} err="failed to get container status \"ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1\": rpc error: code = NotFound desc = could not find container \"ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1\": container with ID starting with ed380c08449ca890a3424a77964827415afd1445816bfadb4fa3b476f6f689c1 not found: ID does not exist" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.197060 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb\") pod \"05a58868-3565-4a25-b847-9422039abe76\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.197132 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggnb4\" (UniqueName: \"kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4\") pod \"05a58868-3565-4a25-b847-9422039abe76\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.197287 5058 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc\") pod \"05a58868-3565-4a25-b847-9422039abe76\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.197329 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config\") pod \"05a58868-3565-4a25-b847-9422039abe76\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.197373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb\") pod \"05a58868-3565-4a25-b847-9422039abe76\" (UID: \"05a58868-3565-4a25-b847-9422039abe76\") " Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.204033 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4" (OuterVolumeSpecName: "kube-api-access-ggnb4") pod "05a58868-3565-4a25-b847-9422039abe76" (UID: "05a58868-3565-4a25-b847-9422039abe76"). InnerVolumeSpecName "kube-api-access-ggnb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.240586 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config" (OuterVolumeSpecName: "config") pod "05a58868-3565-4a25-b847-9422039abe76" (UID: "05a58868-3565-4a25-b847-9422039abe76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.242824 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05a58868-3565-4a25-b847-9422039abe76" (UID: "05a58868-3565-4a25-b847-9422039abe76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.247783 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05a58868-3565-4a25-b847-9422039abe76" (UID: "05a58868-3565-4a25-b847-9422039abe76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.250577 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05a58868-3565-4a25-b847-9422039abe76" (UID: "05a58868-3565-4a25-b847-9422039abe76"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.300254 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.300299 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.300309 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.300321 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05a58868-3565-4a25-b847-9422039abe76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.300333 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggnb4\" (UniqueName: \"kubernetes.io/projected/05a58868-3565-4a25-b847-9422039abe76-kube-api-access-ggnb4\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.446373 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:06:37 crc kubenswrapper[5058]: I1014 07:06:37.453818 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c757dd68f-q9w6l"] Oct 14 07:06:38 crc kubenswrapper[5058]: I1014 07:06:38.393364 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:38 crc kubenswrapper[5058]: I1014 07:06:38.525344 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmwl\" (UniqueName: \"kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl\") pod \"5111efb2-a6ab-4523-a72d-132bf5d965b5\" (UID: \"5111efb2-a6ab-4523-a72d-132bf5d965b5\") " Oct 14 07:06:38 crc kubenswrapper[5058]: I1014 07:06:38.529899 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl" (OuterVolumeSpecName: "kube-api-access-lfmwl") pod "5111efb2-a6ab-4523-a72d-132bf5d965b5" (UID: "5111efb2-a6ab-4523-a72d-132bf5d965b5"). InnerVolumeSpecName "kube-api-access-lfmwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:38 crc kubenswrapper[5058]: I1014 07:06:38.627592 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmwl\" (UniqueName: \"kubernetes.io/projected/5111efb2-a6ab-4523-a72d-132bf5d965b5-kube-api-access-lfmwl\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:38 crc kubenswrapper[5058]: I1014 07:06:38.811130 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a58868-3565-4a25-b847-9422039abe76" path="/var/lib/kubelet/pods/05a58868-3565-4a25-b847-9422039abe76/volumes" Oct 14 07:06:39 crc kubenswrapper[5058]: I1014 07:06:39.138731 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0800-account-create-k7fx9" event={"ID":"5111efb2-a6ab-4523-a72d-132bf5d965b5","Type":"ContainerDied","Data":"22d412c7f47d2e3246b8fe89013c627a9b35ab57d181ce7ab40cbc261fa4fa27"} Oct 14 07:06:39 crc kubenswrapper[5058]: I1014 07:06:39.138777 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d412c7f47d2e3246b8fe89013c627a9b35ab57d181ce7ab40cbc261fa4fa27" Oct 14 07:06:39 crc kubenswrapper[5058]: I1014 07:06:39.138859 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0800-account-create-k7fx9" Oct 14 07:06:44 crc kubenswrapper[5058]: I1014 07:06:44.206880 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f2gf4" event={"ID":"3232b896-82b9-4d80-b74a-2e860e05ffbc","Type":"ContainerStarted","Data":"2207b9755cce7061a8c0d946069da9de3f78f8089645b8f4c379126254f80a99"} Oct 14 07:06:44 crc kubenswrapper[5058]: I1014 07:06:44.224441 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-f2gf4" podStartSLOduration=2.845499253 podStartE2EDuration="36.224419872s" podCreationTimestamp="2025-10-14 07:06:08 +0000 UTC" firstStartedPulling="2025-10-14 07:06:09.964783227 +0000 UTC m=+1117.875867033" lastFinishedPulling="2025-10-14 07:06:43.343703846 +0000 UTC m=+1151.254787652" observedRunningTime="2025-10-14 07:06:44.223187687 +0000 UTC m=+1152.134271523" watchObservedRunningTime="2025-10-14 07:06:44.224419872 +0000 UTC m=+1152.135503708" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.154104 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b71d-account-create-krknh"] Oct 14 07:06:45 crc kubenswrapper[5058]: E1014 07:06:45.155018 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="dnsmasq-dns" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.155060 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="dnsmasq-dns" Oct 14 07:06:45 crc kubenswrapper[5058]: E1014 07:06:45.155086 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="init" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.155159 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="init" Oct 14 07:06:45 crc kubenswrapper[5058]: E1014 07:06:45.155188 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5111efb2-a6ab-4523-a72d-132bf5d965b5" containerName="mariadb-account-create" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.155201 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5111efb2-a6ab-4523-a72d-132bf5d965b5" 
containerName="mariadb-account-create" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.155554 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a58868-3565-4a25-b847-9422039abe76" containerName="dnsmasq-dns" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.155604 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5111efb2-a6ab-4523-a72d-132bf5d965b5" containerName="mariadb-account-create" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.156630 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.161712 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.164209 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b71d-account-create-krknh"] Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.269675 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jh9b\" (UniqueName: \"kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b\") pod \"barbican-b71d-account-create-krknh\" (UID: \"21a76d6d-4463-4f4f-afdc-dde452bd3520\") " pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.356052 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9e44-account-create-7nwrj"] Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.357104 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.361210 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.372097 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jh9b\" (UniqueName: \"kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b\") pod \"barbican-b71d-account-create-krknh\" (UID: \"21a76d6d-4463-4f4f-afdc-dde452bd3520\") " pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.372768 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e44-account-create-7nwrj"] Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.399813 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jh9b\" (UniqueName: \"kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b\") pod \"barbican-b71d-account-create-krknh\" (UID: \"21a76d6d-4463-4f4f-afdc-dde452bd3520\") " pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.473137 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7x8\" (UniqueName: \"kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8\") pod \"neutron-9e44-account-create-7nwrj\" (UID: \"70436572-037b-45af-8016-a6e7098110ba\") " pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.482562 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.575152 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7x8\" (UniqueName: \"kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8\") pod \"neutron-9e44-account-create-7nwrj\" (UID: \"70436572-037b-45af-8016-a6e7098110ba\") " pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.594500 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7x8\" (UniqueName: \"kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8\") pod \"neutron-9e44-account-create-7nwrj\" (UID: \"70436572-037b-45af-8016-a6e7098110ba\") " pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.680377 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:45 crc kubenswrapper[5058]: W1014 07:06:45.886723 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21a76d6d_4463_4f4f_afdc_dde452bd3520.slice/crio-0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc WatchSource:0}: Error finding container 0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc: Status 404 returned error can't find the container with id 0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc Oct 14 07:06:45 crc kubenswrapper[5058]: I1014 07:06:45.886972 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b71d-account-create-krknh"] Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.072302 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e44-account-create-7nwrj"] Oct 14 07:06:46 crc kubenswrapper[5058]: W1014 07:06:46.077836 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70436572_037b_45af_8016_a6e7098110ba.slice/crio-208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8 WatchSource:0}: Error finding container 208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8: Status 404 returned error can't find the container with id 208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8 Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.225447 5058 generic.go:334] "Generic (PLEG): container finished" podID="21a76d6d-4463-4f4f-afdc-dde452bd3520" containerID="13f6ef3c2ee8fc26ccf62a297d8ebb59220986ea408bbf21d781dd0e3ecb304e" exitCode=0 Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.225522 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b71d-account-create-krknh" event={"ID":"21a76d6d-4463-4f4f-afdc-dde452bd3520","Type":"ContainerDied","Data":"13f6ef3c2ee8fc26ccf62a297d8ebb59220986ea408bbf21d781dd0e3ecb304e"} Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.225551 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b71d-account-create-krknh" event={"ID":"21a76d6d-4463-4f4f-afdc-dde452bd3520","Type":"ContainerStarted","Data":"0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc"} Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.227974 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-9e44-account-create-7nwrj" event={"ID":"70436572-037b-45af-8016-a6e7098110ba","Type":"ContainerStarted","Data":"208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8"} Oct 14 07:06:46 crc kubenswrapper[5058]: I1014 07:06:46.266620 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9e44-account-create-7nwrj" podStartSLOduration=1.266602092 podStartE2EDuration="1.266602092s" podCreationTimestamp="2025-10-14 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:46.262879576 +0000 UTC m=+1154.173963382" watchObservedRunningTime="2025-10-14 07:06:46.266602092 +0000 UTC m=+1154.177685908" Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.237809 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dfvvs" event={"ID":"7ff99534-a94e-4306-ae3b-2381a0b8d029","Type":"ContainerStarted","Data":"af94d219d23122103b5302c378076785e59888115f6d1003e054e31c087d0d0c"} Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.239628 5058 generic.go:334] "Generic (PLEG): container finished" podID="70436572-037b-45af-8016-a6e7098110ba" containerID="4320f5f8bacdb0392539f3db341e4f1d4087365766c4007bfa9751bc8b9a17f9" exitCode=0 Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.239689 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e44-account-create-7nwrj" event={"ID":"70436572-037b-45af-8016-a6e7098110ba","Type":"ContainerDied","Data":"4320f5f8bacdb0392539f3db341e4f1d4087365766c4007bfa9751bc8b9a17f9"} Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.272698 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dfvvs" podStartSLOduration=2.42068879 podStartE2EDuration="32.272678808s" podCreationTimestamp="2025-10-14 07:06:15 +0000 UTC" firstStartedPulling="2025-10-14 07:06:16.482481928 +0000 UTC m=+1124.393565734" lastFinishedPulling="2025-10-14 07:06:46.334471946 +0000 UTC m=+1154.245555752" observedRunningTime="2025-10-14 07:06:47.261642842 +0000 UTC m=+1155.172726648" watchObservedRunningTime="2025-10-14 07:06:47.272678808 +0000 UTC m=+1155.183762614" Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.591737 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.713204 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jh9b\" (UniqueName: \"kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b\") pod \"21a76d6d-4463-4f4f-afdc-dde452bd3520\" (UID: \"21a76d6d-4463-4f4f-afdc-dde452bd3520\") " Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.722238 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b" (OuterVolumeSpecName: "kube-api-access-4jh9b") pod "21a76d6d-4463-4f4f-afdc-dde452bd3520" (UID: "21a76d6d-4463-4f4f-afdc-dde452bd3520"). InnerVolumeSpecName "kube-api-access-4jh9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:47 crc kubenswrapper[5058]: I1014 07:06:47.815425 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jh9b\" (UniqueName: \"kubernetes.io/projected/21a76d6d-4463-4f4f-afdc-dde452bd3520-kube-api-access-4jh9b\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.249247 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b71d-account-create-krknh" Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.250029 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b71d-account-create-krknh" event={"ID":"21a76d6d-4463-4f4f-afdc-dde452bd3520","Type":"ContainerDied","Data":"0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc"} Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.250126 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4e139d21351597fea25ba430b6726fa10bda05ad76d131490347db903348cc" Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.605165 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.746577 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb7x8\" (UniqueName: \"kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8\") pod \"70436572-037b-45af-8016-a6e7098110ba\" (UID: \"70436572-037b-45af-8016-a6e7098110ba\") " Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.750590 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8" (OuterVolumeSpecName: "kube-api-access-tb7x8") pod "70436572-037b-45af-8016-a6e7098110ba" (UID: "70436572-037b-45af-8016-a6e7098110ba"). InnerVolumeSpecName "kube-api-access-tb7x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:48 crc kubenswrapper[5058]: I1014 07:06:48.849110 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb7x8\" (UniqueName: \"kubernetes.io/projected/70436572-037b-45af-8016-a6e7098110ba-kube-api-access-tb7x8\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:49 crc kubenswrapper[5058]: I1014 07:06:49.261534 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e44-account-create-7nwrj" event={"ID":"70436572-037b-45af-8016-a6e7098110ba","Type":"ContainerDied","Data":"208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8"} Oct 14 07:06:49 crc kubenswrapper[5058]: I1014 07:06:49.262019 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208d8ac65b08710d596f7d986170f3532c37741a82666af2fa3423be9726c7d8" Oct 14 07:06:49 crc kubenswrapper[5058]: I1014 07:06:49.261654 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9e44-account-create-7nwrj" Oct 14 07:06:50 crc kubenswrapper[5058]: I1014 07:06:50.273372 5058 generic.go:334] "Generic (PLEG): container finished" podID="7ff99534-a94e-4306-ae3b-2381a0b8d029" containerID="af94d219d23122103b5302c378076785e59888115f6d1003e054e31c087d0d0c" exitCode=0 Oct 14 07:06:50 crc kubenswrapper[5058]: I1014 07:06:50.273452 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dfvvs" event={"ID":"7ff99534-a94e-4306-ae3b-2381a0b8d029","Type":"ContainerDied","Data":"af94d219d23122103b5302c378076785e59888115f6d1003e054e31c087d0d0c"} Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.297363 5058 generic.go:334] "Generic (PLEG): container finished" podID="3232b896-82b9-4d80-b74a-2e860e05ffbc" containerID="2207b9755cce7061a8c0d946069da9de3f78f8089645b8f4c379126254f80a99" exitCode=0 Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.297434 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f2gf4" event={"ID":"3232b896-82b9-4d80-b74a-2e860e05ffbc","Type":"ContainerDied","Data":"2207b9755cce7061a8c0d946069da9de3f78f8089645b8f4c379126254f80a99"} Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.701132 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dfvvs" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.802451 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle\") pod \"7ff99534-a94e-4306-ae3b-2381a0b8d029\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.802655 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9rq4\" (UniqueName: \"kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4\") pod \"7ff99534-a94e-4306-ae3b-2381a0b8d029\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.802836 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data\") pod \"7ff99534-a94e-4306-ae3b-2381a0b8d029\" (UID: \"7ff99534-a94e-4306-ae3b-2381a0b8d029\") " Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.809361 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4" (OuterVolumeSpecName: "kube-api-access-k9rq4") pod "7ff99534-a94e-4306-ae3b-2381a0b8d029" (UID: "7ff99534-a94e-4306-ae3b-2381a0b8d029"). InnerVolumeSpecName "kube-api-access-k9rq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.837993 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff99534-a94e-4306-ae3b-2381a0b8d029" (UID: "7ff99534-a94e-4306-ae3b-2381a0b8d029"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.868066 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data" (OuterVolumeSpecName: "config-data") pod "7ff99534-a94e-4306-ae3b-2381a0b8d029" (UID: "7ff99534-a94e-4306-ae3b-2381a0b8d029"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.905880 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.906028 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9rq4\" (UniqueName: \"kubernetes.io/projected/7ff99534-a94e-4306-ae3b-2381a0b8d029-kube-api-access-k9rq4\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:51 crc kubenswrapper[5058]: I1014 07:06:51.906137 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff99534-a94e-4306-ae3b-2381a0b8d029-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.311653 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dfvvs" event={"ID":"7ff99534-a94e-4306-ae3b-2381a0b8d029","Type":"ContainerDied","Data":"fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280"} Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.312254 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa625ebc874e0d731c4eae243d36eeae58cca83cf32f2b08d6dc004a38bd5280" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.311677 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dfvvs" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593227 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m4pjj"] Oct 14 07:06:52 crc kubenswrapper[5058]: E1014 07:06:52.593562 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a76d6d-4463-4f4f-afdc-dde452bd3520" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593574 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a76d6d-4463-4f4f-afdc-dde452bd3520" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: E1014 07:06:52.593582 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" containerName="keystone-db-sync" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593588 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" containerName="keystone-db-sync" Oct 14 07:06:52 crc kubenswrapper[5058]: E1014 07:06:52.593620 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70436572-037b-45af-8016-a6e7098110ba" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593626 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="70436572-037b-45af-8016-a6e7098110ba" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593774 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a76d6d-4463-4f4f-afdc-dde452bd3520" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593789 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="70436572-037b-45af-8016-a6e7098110ba" containerName="mariadb-account-create" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.593808 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" containerName="keystone-db-sync" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.594324 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.597225 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.597240 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xv7zn" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.597350 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.597420 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.627863 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m4pjj"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.648895 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.650318 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.671254 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.726691 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.726772 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.726832 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lthnz\" (UniqueName: \"kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.726878 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727103 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727199 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727382 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727422 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbth7\" (UniqueName: \"kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727466 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727507 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727541 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.727602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.779480 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8n5f9"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.780650 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.784401 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.785142 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.785293 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jd7h2" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.816849 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8n5f9"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.829865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.829927 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.829953 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bqp\" (UniqueName: \"kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830001 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830162 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830230 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbth7\" (UniqueName: \"kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830256 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830274 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830292 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830312 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830327 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830351 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830594 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830628 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830682 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lthnz\" (UniqueName: \"kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830736 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.830868 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.841740 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.843236 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.845010 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.848346 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.849684 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.850256 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.850385 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.850509 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " 
pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.851167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.855509 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.901952 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbth7\" (UniqueName: \"kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7\") pod \"keystone-bootstrap-m4pjj\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.907514 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cxv76"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.908641 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.911124 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.911331 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fhksz" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.913532 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cxv76"] Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.915090 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.916707 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lthnz\" (UniqueName: \"kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz\") pod \"dnsmasq-dns-b6dcd555f-ghldb\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932765 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932822 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bqp\" (UniqueName: \"kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932845 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts\") pod \"cinder-db-sync-8n5f9\" (UID: 
\"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932895 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932914 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.932963 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.942691 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.943101 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.944526 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.955530 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.956497 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.964876 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.983916 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bqp\" (UniqueName: \"kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp\") pod \"cinder-db-sync-8n5f9\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:52 crc kubenswrapper[5058]: I1014 07:06:52.986893 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:52.997788 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.020374 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7fblc"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.020581 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.022332 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.027837 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f2gf4" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.035728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.035761 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.035834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjn62\" (UniqueName: \"kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.036227 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.037961 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc4b" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.055923 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:06:53 crc kubenswrapper[5058]: E1014 07:06:53.056297 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" containerName="glance-db-sync" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.056323 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" containerName="glance-db-sync" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.056537 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" containerName="glance-db-sync" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.058122 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.062829 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.062998 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.090740 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.119883 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7fblc"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.129232 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jkqz7"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.134033 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.136704 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tjb\" (UniqueName: \"kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb\") pod \"3232b896-82b9-4d80-b74a-2e860e05ffbc\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.136928 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data\") pod \"3232b896-82b9-4d80-b74a-2e860e05ffbc\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137022 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle\") pod \"3232b896-82b9-4d80-b74a-2e860e05ffbc\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137048 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data\") pod \"3232b896-82b9-4d80-b74a-2e860e05ffbc\" (UID: \"3232b896-82b9-4d80-b74a-2e860e05ffbc\") " Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137252 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjn62\" (UniqueName: \"kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137297 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137315 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137352 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137396 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137432 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137454 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137480 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137516 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137535 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rg8\" (UniqueName: \"kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137581 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.137626 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.142466 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jkqz7"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.143264 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.143465 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.143473 5058 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"placement-placement-dockercfg-j9xcl" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.145276 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3232b896-82b9-4d80-b74a-2e860e05ffbc" (UID: "3232b896-82b9-4d80-b74a-2e860e05ffbc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.153746 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.156995 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.157380 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb" (OuterVolumeSpecName: "kube-api-access-k9tjb") pod "3232b896-82b9-4d80-b74a-2e860e05ffbc" (UID: "3232b896-82b9-4d80-b74a-2e860e05ffbc"). InnerVolumeSpecName "kube-api-access-k9tjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.159541 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.161750 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.164272 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjn62\" (UniqueName: \"kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62\") pod \"neutron-db-sync-cxv76\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.168863 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.193024 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3232b896-82b9-4d80-b74a-2e860e05ffbc" (UID: "3232b896-82b9-4d80-b74a-2e860e05ffbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.201535 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data" (OuterVolumeSpecName: "config-data") pod "3232b896-82b9-4d80-b74a-2e860e05ffbc" (UID: "3232b896-82b9-4d80-b74a-2e860e05ffbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240042 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240089 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jr9\" (UniqueName: \"kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240141 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240165 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240190 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rg8\" (UniqueName: \"kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240215 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240233 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240250 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " 
pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240271 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240291 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240308 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240327 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wcj\" (UniqueName: \"kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240358 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240386 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240419 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240437 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240464 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240480 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240520 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240560 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tjb\" (UniqueName: \"kubernetes.io/projected/3232b896-82b9-4d80-b74a-2e860e05ffbc-kube-api-access-k9tjb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240571 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240580 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.240589 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3232b896-82b9-4d80-b74a-2e860e05ffbc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.242174 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.242760 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.245947 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.250470 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.250732 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.255977 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.257252 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.258779 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.259374 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f\") pod \"ceilometer-0\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.267436 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rg8\" (UniqueName: \"kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8\") pod \"barbican-db-sync-7fblc\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.322948 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f2gf4" event={"ID":"3232b896-82b9-4d80-b74a-2e860e05ffbc","Type":"ContainerDied","Data":"69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765"} Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.322984 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69cbf5013abd9618ba1973991a239b833c24a482e23e53afaaad63635e1d4765" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.323083 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f2gf4" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342682 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342715 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wcj\" (UniqueName: \"kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342870 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342919 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.342976 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jr9\" (UniqueName: \"kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.344749 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.344826 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349468 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349525 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349594 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349602 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349702 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.349734 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.350187 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.350932 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.351159 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.351213 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.356879 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.371819 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wcj\" (UniqueName: \"kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj\") pod \"placement-db-sync-jkqz7\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.372279 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cxv76" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.372087 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jr9\" (UniqueName: \"kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9\") pod \"dnsmasq-dns-5c75bdf7bf-txqvn\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.395253 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7fblc" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.423911 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.472554 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jkqz7" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.499418 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.524538 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m4pjj"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.549876 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.689689 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.701022 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8n5f9"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.764058 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.765928 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.773904 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.866322 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.866396 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.866429 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.866452 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r622\" (UniqueName: \"kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.866469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.867364 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.969334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.969391 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.969424 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.970364 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.970518 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.970599 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.970637 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r622\" (UniqueName: \"kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.970706 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.974429 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.976971 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.979345 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:53 crc kubenswrapper[5058]: I1014 07:06:53.996129 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r622\" (UniqueName: 
\"kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622\") pod \"dnsmasq-dns-77f7885f7f-gp8sj\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.065642 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cxv76"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.208355 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.210019 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7fblc"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.313768 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.328531 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jkqz7"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.340235 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:54 crc kubenswrapper[5058]: W1014 07:06:54.359548 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc07131_fe8b_4931_9146_b6977ceb175c.slice/crio-4e167f72e8789acc1a5b67bff132cf52137ab6d3fb13b383059b98082228ed5e WatchSource:0}: Error finding container 4e167f72e8789acc1a5b67bff132cf52137ab6d3fb13b383059b98082228ed5e: Status 404 returned error can't find the container with id 4e167f72e8789acc1a5b67bff132cf52137ab6d3fb13b383059b98082228ed5e Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.364526 5058 generic.go:334] "Generic (PLEG): container finished" podID="f383743f-e21b-4e88-a11a-8555f0254364" containerID="6bd36a392736d72bf170237213fc17858140146728b316cf239a5730c71f5d83" exitCode=0 Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.364706 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" event={"ID":"f383743f-e21b-4e88-a11a-8555f0254364","Type":"ContainerDied","Data":"6bd36a392736d72bf170237213fc17858140146728b316cf239a5730c71f5d83"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.364824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" event={"ID":"f383743f-e21b-4e88-a11a-8555f0254364","Type":"ContainerStarted","Data":"dd845e2ce67bdf3e1f6f9f60dd89cde06544fcb07e47b05bad751b464b22e019"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.366478 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerStarted","Data":"2cef869a7234db04b263fa4455919913d82ee38ac53600381600a4e42608a42f"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.368370 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cxv76" event={"ID":"15c0ea98-8f40-4281-b309-4c3994b849ba","Type":"ContainerStarted","Data":"b42167829db655ee29d561f092b8f822ca55d825ad8a0a2d947e23d43b26c50a"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.368403 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cxv76" event={"ID":"15c0ea98-8f40-4281-b309-4c3994b849ba","Type":"ContainerStarted","Data":"f737b233cb9191d93a8b3cf433dc50a6218d343930ddc33020275f02eb92cb05"} Oct 14 07:06:54 crc 
kubenswrapper[5058]: I1014 07:06:54.370969 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4pjj" event={"ID":"27f7a9a5-d047-422b-9924-dcc3d434da12","Type":"ContainerStarted","Data":"0bb836c19aa15735f6bf1496006b97de7fede44e1beaf2c25a807476bc1dca66"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.370999 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4pjj" event={"ID":"27f7a9a5-d047-422b-9924-dcc3d434da12","Type":"ContainerStarted","Data":"322ab701da26f27bb52eaf653890f35da502863709fe5f0c3e682d9c49c18557"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.372833 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7fblc" event={"ID":"0298834e-f248-4e99-8f2d-7807c9421143","Type":"ContainerStarted","Data":"c9c6801cfdfa693dcbad2415f1615eb58e26da24827b6d2c3a44eaf73015e100"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.374529 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8n5f9" event={"ID":"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0","Type":"ContainerStarted","Data":"cb9dabed246e983ecc40863211f250f08e6419ab6286e89778a2acf7d9c82a0b"} Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.420585 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cxv76" podStartSLOduration=2.420563343 podStartE2EDuration="2.420563343s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:54.41244756 +0000 UTC m=+1162.323531386" watchObservedRunningTime="2025-10-14 07:06:54.420563343 +0000 UTC m=+1162.331647169" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.441165 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m4pjj" podStartSLOduration=2.441143752 podStartE2EDuration="2.441143752s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:54.429970812 +0000 UTC m=+1162.341054618" watchObservedRunningTime="2025-10-14 07:06:54.441143752 +0000 UTC m=+1162.352227558" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.529870 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.531302 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.533164 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.533457 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m27gd" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.537094 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.552357 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.591865 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.591910 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.591948 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.591983 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-678pg\" (UniqueName: \"kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.592125 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.592220 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.592278 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " 
pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698492 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-678pg\" (UniqueName: \"kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698550 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698595 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698637 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698777 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.698826 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.699472 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.700867 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 
07:06:54.701256 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.707315 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.709365 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.713392 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.716471 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.722094 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-678pg\" (UniqueName: \"kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.749612 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:06:54 crc kubenswrapper[5058]: E1014 07:06:54.751702 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f383743f-e21b-4e88-a11a-8555f0254364" containerName="init" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.751725 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f383743f-e21b-4e88-a11a-8555f0254364" containerName="init" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.751929 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f383743f-e21b-4e88-a11a-8555f0254364" containerName="init" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.752937 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.756375 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.760179 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.770944 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.800336 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.800845 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.800907 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.800937 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801033 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801191 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lthnz\" (UniqueName: \"kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz\") pod \"f383743f-e21b-4e88-a11a-8555f0254364\" (UID: \"f383743f-e21b-4e88-a11a-8555f0254364\") " Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801476 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801511 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801572 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rx8g\" (UniqueName: \"kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801653 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801676 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801704 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.801781 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.812742 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz" (OuterVolumeSpecName: "kube-api-access-lthnz") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "kube-api-access-lthnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.836380 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config" (OuterVolumeSpecName: "config") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.839142 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.850278 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.850612 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.853626 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f383743f-e21b-4e88-a11a-8555f0254364" (UID: "f383743f-e21b-4e88-a11a-8555f0254364"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.859339 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.910978 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911012 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911040 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911095 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911121 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911153 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911195 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rx8g\" (UniqueName: \"kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911257 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911268 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911277 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911288 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lthnz\" (UniqueName: \"kubernetes.io/projected/f383743f-e21b-4e88-a11a-8555f0254364-kube-api-access-lthnz\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911299 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911307 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f383743f-e21b-4e88-a11a-8555f0254364-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911783 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.911968 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.916037 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.916561 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.920296 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.942026 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rx8g\" (UniqueName: \"kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:54 crc kubenswrapper[5058]: I1014 07:06:54.955114 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.038590 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.049658 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.070366 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.195645 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.240920 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.266454 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.404282 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerStarted","Data":"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.404931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerStarted","Data":"3c6cce9de5e739cba0d9a84bed0832a25d9e835a1ec50944bca3da98c591c977"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.412784 5058 generic.go:334] "Generic (PLEG): container finished" podID="2cc07131-fe8b-4931-9146-b6977ceb175c" containerID="6773f8863044c1d64acd98bfe0a01e8378d0ec562b6e8866f303b2f6f69ece57" exitCode=0 Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.412870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" event={"ID":"2cc07131-fe8b-4931-9146-b6977ceb175c","Type":"ContainerDied","Data":"6773f8863044c1d64acd98bfe0a01e8378d0ec562b6e8866f303b2f6f69ece57"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.412895 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" event={"ID":"2cc07131-fe8b-4931-9146-b6977ceb175c","Type":"ContainerStarted","Data":"4e167f72e8789acc1a5b67bff132cf52137ab6d3fb13b383059b98082228ed5e"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.416019 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jkqz7" event={"ID":"3e446e94-abcb-495b-be50-2100e77d2e4e","Type":"ContainerStarted","Data":"4c0934b8de50ca787136059d841a49f2b24c1a00638e49cb87021a0fb3d21b6f"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.420626 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.421106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6dcd555f-ghldb" event={"ID":"f383743f-e21b-4e88-a11a-8555f0254364","Type":"ContainerDied","Data":"dd845e2ce67bdf3e1f6f9f60dd89cde06544fcb07e47b05bad751b464b22e019"} Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.421165 5058 scope.go:117] "RemoveContainer" containerID="6bd36a392736d72bf170237213fc17858140146728b316cf239a5730c71f5d83" Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.528134 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.537690 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6dcd555f-ghldb"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.617471 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:06:55 crc kubenswrapper[5058]: I1014 07:06:55.819264 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.029023 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.152893 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.152950 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.152994 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.153035 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.153054 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jr9\" (UniqueName: \"kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.153077 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config\") pod \"2cc07131-fe8b-4931-9146-b6977ceb175c\" (UID: \"2cc07131-fe8b-4931-9146-b6977ceb175c\") " Oct 14 07:06:56 crc 
kubenswrapper[5058]: I1014 07:06:56.161408 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9" (OuterVolumeSpecName: "kube-api-access-29jr9") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "kube-api-access-29jr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.206508 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config" (OuterVolumeSpecName: "config") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.211122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.213352 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.214165 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.226984 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cc07131-fe8b-4931-9146-b6977ceb175c" (UID: "2cc07131-fe8b-4931-9146-b6977ceb175c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.256962 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.256988 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.256999 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.257007 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.257016 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jr9\" (UniqueName: \"kubernetes.io/projected/2cc07131-fe8b-4931-9146-b6977ceb175c-kube-api-access-29jr9\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.257025 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc07131-fe8b-4931-9146-b6977ceb175c-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.452144 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" event={"ID":"2cc07131-fe8b-4931-9146-b6977ceb175c","Type":"ContainerDied","Data":"4e167f72e8789acc1a5b67bff132cf52137ab6d3fb13b383059b98082228ed5e"} Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.452178 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c75bdf7bf-txqvn" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.452254 5058 scope.go:117] "RemoveContainer" containerID="6773f8863044c1d64acd98bfe0a01e8378d0ec562b6e8866f303b2f6f69ece57" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.455082 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerStarted","Data":"6b9251f05ad82f95fcded158b8028aa0145c13a252ab41f4ef2baf263ef1e102"} Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.457896 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerStarted","Data":"f018bfbb8a12d348d807827514ca1c4527d34c3611505f8e555e2491133f140b"} Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.463494 5058 generic.go:334] "Generic (PLEG): container finished" podID="e42fe5a2-62f8-490d-b873-b682c849cada" containerID="09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46" exitCode=0 Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.463526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerDied","Data":"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46"} Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.463543 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerStarted","Data":"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515"} Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.464386 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.488288 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" podStartSLOduration=3.488273044 podStartE2EDuration="3.488273044s" podCreationTimestamp="2025-10-14 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:56.481127569 +0000 UTC m=+1164.392211385" watchObservedRunningTime="2025-10-14 07:06:56.488273044 +0000 UTC m=+1164.399356850" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.533836 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.541702 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c75bdf7bf-txqvn"] Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.806577 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc07131-fe8b-4931-9146-b6977ceb175c" path="/var/lib/kubelet/pods/2cc07131-fe8b-4931-9146-b6977ceb175c/volumes" Oct 14 07:06:56 crc kubenswrapper[5058]: I1014 07:06:56.807524 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f383743f-e21b-4e88-a11a-8555f0254364" path="/var/lib/kubelet/pods/f383743f-e21b-4e88-a11a-8555f0254364/volumes" Oct 14 07:06:57 crc kubenswrapper[5058]: I1014 07:06:57.503247 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerStarted","Data":"68d6b42f3edc4bc0b277feb9044d37353b4434a1fc435de149e8f8f61007b27c"} Oct 14 07:06:57 crc kubenswrapper[5058]: I1014 07:06:57.508382 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerStarted","Data":"4ab1702d57926051d71cff022e85a3270b24f02a5cba9e2d68ee87802fbb127d"} Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.531747 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerStarted","Data":"62317f7396444045afe398ba481138b47e754f6db049c8b9df4842bdcbd3b589"} Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.532037 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-log" containerID="cri-o://68d6b42f3edc4bc0b277feb9044d37353b4434a1fc435de149e8f8f61007b27c" gracePeriod=30 Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.532556 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-httpd" containerID="cri-o://62317f7396444045afe398ba481138b47e754f6db049c8b9df4842bdcbd3b589" gracePeriod=30 Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.544980 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerStarted","Data":"981f55bc2950644b5568f2df7a34102d6a8a0fca9f0f682f0f6d57a9327d7fac"} Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.545158 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-log" containerID="cri-o://4ab1702d57926051d71cff022e85a3270b24f02a5cba9e2d68ee87802fbb127d" gracePeriod=30 Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.545464 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-httpd" containerID="cri-o://981f55bc2950644b5568f2df7a34102d6a8a0fca9f0f682f0f6d57a9327d7fac" gracePeriod=30 Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.588390 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.588373714 podStartE2EDuration="6.588373714s" podCreationTimestamp="2025-10-14 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:59.574995581 +0000 UTC m=+1167.486079387" watchObservedRunningTime="2025-10-14 07:06:59.588373714 +0000 UTC m=+1167.499457520" Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.588754 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.588726274 podStartE2EDuration="6.588726274s" podCreationTimestamp="2025-10-14 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 
Oct 14 07:06:59 crc kubenswrapper[5058]: I1014 07:06:59.588754 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.588726274 podStartE2EDuration="6.588726274s" podCreationTimestamp="2025-10-14 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:06:59.559389494 +0000 UTC m=+1167.470473310" watchObservedRunningTime="2025-10-14 07:06:59.588726274 +0000 UTC m=+1167.499810090"
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.557977 5058 generic.go:334] "Generic (PLEG): container finished" podID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerID="62317f7396444045afe398ba481138b47e754f6db049c8b9df4842bdcbd3b589" exitCode=0
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.558026 5058 generic.go:334] "Generic (PLEG): container finished" podID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerID="68d6b42f3edc4bc0b277feb9044d37353b4434a1fc435de149e8f8f61007b27c" exitCode=143
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.558086 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerDied","Data":"62317f7396444045afe398ba481138b47e754f6db049c8b9df4842bdcbd3b589"}
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.558119 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerDied","Data":"68d6b42f3edc4bc0b277feb9044d37353b4434a1fc435de149e8f8f61007b27c"}
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.561031 5058 generic.go:334] "Generic (PLEG): container finished" podID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerID="981f55bc2950644b5568f2df7a34102d6a8a0fca9f0f682f0f6d57a9327d7fac" exitCode=0
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.561060 5058 generic.go:334] "Generic (PLEG): container finished" podID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerID="4ab1702d57926051d71cff022e85a3270b24f02a5cba9e2d68ee87802fbb127d" exitCode=143
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.561062 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerDied","Data":"981f55bc2950644b5568f2df7a34102d6a8a0fca9f0f682f0f6d57a9327d7fac"}
Oct 14 07:07:00 crc kubenswrapper[5058]: I1014 07:07:00.561099 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerDied","Data":"4ab1702d57926051d71cff022e85a3270b24f02a5cba9e2d68ee87802fbb127d"}
Oct 14 07:07:01 crc kubenswrapper[5058]: I1014 07:07:01.570414 5058 generic.go:334] "Generic (PLEG): container finished" podID="27f7a9a5-d047-422b-9924-dcc3d434da12" containerID="0bb836c19aa15735f6bf1496006b97de7fede44e1beaf2c25a807476bc1dca66" exitCode=0
Oct 14 07:07:01 crc kubenswrapper[5058]: I1014 07:07:01.570775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4pjj" event={"ID":"27f7a9a5-d047-422b-9924-dcc3d434da12","Type":"ContainerDied","Data":"0bb836c19aa15735f6bf1496006b97de7fede44e1beaf2c25a807476bc1dca66"}
Oct 14 07:07:04 crc kubenswrapper[5058]: I1014 07:07:04.210182 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj"
Oct 14 07:07:04 crc kubenswrapper[5058]: I1014 07:07:04.279332 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"]
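The container-finished entries above report exitCode=0 for the glance-httpd containers and exitCode=143 for the glance-log containers. By the usual 128+N convention, 143 means the process was killed by signal 15 (SIGTERM), which is what the graceful stop requested a moment earlier with gracePeriod=30 delivers first. A small illustrative decoder:

package main

import (
	"fmt"
	"syscall"
)

// describe decodes a container exit code: 128+N conventionally means
// "killed by signal N", so 143 = SIGTERM, 137 = SIGKILL, and so on.
func describe(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	for _, c := range []int{0, 143} { // the two values observed above
		fmt.Printf("exitCode=%d: %s\n", c, describe(c))
	}
}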
containerName="dnsmasq-dns" containerID="cri-o://31884091c1de1ca3725eda2705641674af18bc20436040f8d5545760d1d2cdd3" gracePeriod=10 Oct 14 07:07:04 crc kubenswrapper[5058]: I1014 07:07:04.598136 5058 generic.go:334] "Generic (PLEG): container finished" podID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerID="31884091c1de1ca3725eda2705641674af18bc20436040f8d5545760d1d2cdd3" exitCode=0 Oct 14 07:07:04 crc kubenswrapper[5058]: I1014 07:07:04.598182 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" event={"ID":"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9","Type":"ContainerDied","Data":"31884091c1de1ca3725eda2705641674af18bc20436040f8d5545760d1d2cdd3"} Oct 14 07:07:06 crc kubenswrapper[5058]: I1014 07:07:06.529001 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Oct 14 07:07:11 crc kubenswrapper[5058]: I1014 07:07:11.528117 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.226378 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.237261 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301134 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301230 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301271 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301304 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301415 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301451 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301496 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301540 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rx8g\" (UniqueName: \"kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301594 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-678pg\" (UniqueName: \"kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301630 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301664 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301728 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle\") pod \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\" (UID: \"19e7ea2a-c4f4-407e-b3b8-d3d572130c71\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301783 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301855 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data\") pod \"73e21f83-8001-4f19-aa07-20f5bc3f7700\" (UID: \"73e21f83-8001-4f19-aa07-20f5bc3f7700\") " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.301865 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs" (OuterVolumeSpecName: "logs") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.302348 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.303171 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.303217 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs" (OuterVolumeSpecName: "logs") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.303352 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.308144 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg" (OuterVolumeSpecName: "kube-api-access-678pg") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "kube-api-access-678pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.309575 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.309739 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts" (OuterVolumeSpecName: "scripts") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.328151 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.328170 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g" (OuterVolumeSpecName: "kube-api-access-7rx8g") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "kube-api-access-7rx8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.328171 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts" (OuterVolumeSpecName: "scripts") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.336504 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.347328 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.377999 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data" (OuterVolumeSpecName: "config-data") pod "73e21f83-8001-4f19-aa07-20f5bc3f7700" (UID: "73e21f83-8001-4f19-aa07-20f5bc3f7700"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.385874 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data" (OuterVolumeSpecName: "config-data") pod "19e7ea2a-c4f4-407e-b3b8-d3d572130c71" (UID: "19e7ea2a-c4f4-407e-b3b8-d3d572130c71"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404135 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404184 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404200 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404218 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rx8g\" (UniqueName: \"kubernetes.io/projected/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-kube-api-access-7rx8g\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404232 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-678pg\" (UniqueName: \"kubernetes.io/projected/73e21f83-8001-4f19-aa07-20f5bc3f7700-kube-api-access-678pg\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404243 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73e21f83-8001-4f19-aa07-20f5bc3f7700-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404254 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404265 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404305 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404318 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404331 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e7ea2a-c4f4-407e-b3b8-d3d572130c71-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404342 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e21f83-8001-4f19-aa07-20f5bc3f7700-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.404359 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.434712 5058 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.436366 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.506150 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.506189 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.685474 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19e7ea2a-c4f4-407e-b3b8-d3d572130c71","Type":"ContainerDied","Data":"6b9251f05ad82f95fcded158b8028aa0145c13a252ab41f4ef2baf263ef1e102"} Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.685513 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.685529 5058 scope.go:117] "RemoveContainer" containerID="62317f7396444045afe398ba481138b47e754f6db049c8b9df4842bdcbd3b589" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.688846 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73e21f83-8001-4f19-aa07-20f5bc3f7700","Type":"ContainerDied","Data":"f018bfbb8a12d348d807827514ca1c4527d34c3611505f8e555e2491133f140b"} Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.688893 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.730919 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.772811 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778242 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.778669 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778697 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.778734 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778743 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.778768 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc07131-fe8b-4931-9146-b6977ceb175c" containerName="init" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778777 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc07131-fe8b-4931-9146-b6977ceb175c" containerName="init" Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.778789 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778818 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.778829 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.778836 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.779301 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc07131-fe8b-4931-9146-b6977ceb175c" containerName="init" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.779327 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.779354 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-log" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.779364 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.779381 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" containerName="glance-httpd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.780429 5058 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.782927 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.785943 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.786116 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.786192 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-m27gd" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818604 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818654 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818679 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bft7g\" (UniqueName: \"kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818756 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818785 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818883 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.818917 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.819076 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.822931 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e7ea2a-c4f4-407e-b3b8-d3d572130c71" path="/var/lib/kubelet/pods/19e7ea2a-c4f4-407e-b3b8-d3d572130c71/volumes" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.823693 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.823732 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.823755 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.833901 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.835613 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.839660 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.840391 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.851164 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.919890 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.919938 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.919960 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.919979 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bft7g\" (UniqueName: \"kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g\") pod \"glance-default-internal-api-0\" (UID: 
\"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.920010 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.920030 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.920074 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.920091 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.920214 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.921163 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.921503 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.923878 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.924252 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc 
kubenswrapper[5058]: E1014 07:07:12.928337 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:121a845dacd051814fb4709fc557420363cd923a9cf2b4ed09addd394f83a3f5"
Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.928466 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:121a845dacd051814fb4709fc557420363cd923a9cf2b4ed09addd394f83a3f5,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9rg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-7fblc_openstack(0298834e-f248-4e99-8f2d-7807c9421143): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 07:07:12 crc kubenswrapper[5058]: E1014 07:07:12.930392 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-7fblc" podUID="0298834e-f248-4e99-8f2d-7807c9421143"
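The PullImage failure above is a gRPC error surfaced from the CRI runtime: code = Canceled with "copying config: context canceled", i.e. the pull's context was cancelled rather than the registry rejecting the image. A sketch of classifying such errors by gRPC status code; it assumes google.golang.org/grpc is on the module path, and the retry decision shown is illustrative, not kubelet's actual backoff policy:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryable reports whether a CRI error looks transient. status.Code
// returns codes.Unknown for non-gRPC errors, which falls through to false.
func retryable(err error) bool {
	switch status.Code(err) {
	case codes.Canceled, codes.DeadlineExceeded, codes.Unavailable:
		return true
	default:
		return false
	}
}

func main() {
	// Reconstructed from the log line: rpc error: code = Canceled ...
	err := status.Error(codes.Canceled, "copying config: context canceled")
	fmt.Println("retryable:", retryable(err)) // retryable: true
}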
Need to start a new one" pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.933617 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.934711 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.943359 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bft7g\" (UniqueName: \"kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:12 crc kubenswrapper[5058]: I1014 07:07:12.965090 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.022938 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.022994 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023023 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023066 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2h8v\" (UniqueName: \"kubernetes.io/projected/d7775098-57ac-4e54-8b25-ce55dc5d2705-kube-api-access-n2h8v\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023149 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023214 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023274 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.023327 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.124374 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.124543 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.125435 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.125474 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.125582 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbth7\" (UniqueName: \"kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.125654 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: \"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.125702 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts\") pod \"27f7a9a5-d047-422b-9924-dcc3d434da12\" (UID: 
\"27f7a9a5-d047-422b-9924-dcc3d434da12\") " Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126061 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126157 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126231 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126254 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126459 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126486 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126513 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.126568 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2h8v\" (UniqueName: \"kubernetes.io/projected/d7775098-57ac-4e54-8b25-ce55dc5d2705-kube-api-access-n2h8v\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.127347 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc 
kubenswrapper[5058]: I1014 07:07:13.139466 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.139991 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.146503 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.151945 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.165619 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts" (OuterVolumeSpecName: "scripts") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.165772 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7" (OuterVolumeSpecName: "kube-api-access-mbth7") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "kube-api-access-mbth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.165859 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.165881 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.166290 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.168282 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2h8v\" (UniqueName: \"kubernetes.io/projected/d7775098-57ac-4e54-8b25-ce55dc5d2705-kube-api-access-n2h8v\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.180921 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.186019 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data" (OuterVolumeSpecName: "config-data") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.190918 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.219445 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27f7a9a5-d047-422b-9924-dcc3d434da12" (UID: "27f7a9a5-d047-422b-9924-dcc3d434da12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.228446 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbth7\" (UniqueName: \"kubernetes.io/projected/27f7a9a5-d047-422b-9924-dcc3d434da12-kube-api-access-mbth7\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.228715 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.228839 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.228896 5058 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.228960 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.229017 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f7a9a5-d047-422b-9924-dcc3d434da12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.454924 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.700603 5058 generic.go:334] "Generic (PLEG): container finished" podID="15c0ea98-8f40-4281-b309-4c3994b849ba" containerID="b42167829db655ee29d561f092b8f822ca55d825ad8a0a2d947e23d43b26c50a" exitCode=0 Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.700688 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cxv76" event={"ID":"15c0ea98-8f40-4281-b309-4c3994b849ba","Type":"ContainerDied","Data":"b42167829db655ee29d561f092b8f822ca55d825ad8a0a2d947e23d43b26c50a"} Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.703089 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m4pjj" Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.705152 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4pjj" event={"ID":"27f7a9a5-d047-422b-9924-dcc3d434da12","Type":"ContainerDied","Data":"322ab701da26f27bb52eaf653890f35da502863709fe5f0c3e682d9c49c18557"} Oct 14 07:07:13 crc kubenswrapper[5058]: I1014 07:07:13.705220 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322ab701da26f27bb52eaf653890f35da502863709fe5f0c3e682d9c49c18557" Oct 14 07:07:13 crc kubenswrapper[5058]: E1014 07:07:13.706040 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:121a845dacd051814fb4709fc557420363cd923a9cf2b4ed09addd394f83a3f5\\\"\"" pod="openstack/barbican-db-sync-7fblc" podUID="0298834e-f248-4e99-8f2d-7807c9421143" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.017538 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m4pjj"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.024292 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m4pjj"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.107057 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2nnd9"] Oct 14 07:07:14 crc kubenswrapper[5058]: E1014 07:07:14.107953 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f7a9a5-d047-422b-9924-dcc3d434da12" containerName="keystone-bootstrap" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.108001 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f7a9a5-d047-422b-9924-dcc3d434da12" containerName="keystone-bootstrap" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.108415 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f7a9a5-d047-422b-9924-dcc3d434da12" containerName="keystone-bootstrap" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.109355 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.114344 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.114649 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.114993 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.115699 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xv7zn" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.116368 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2nnd9"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.198071 5058 scope.go:117] "RemoveContainer" containerID="68d6b42f3edc4bc0b277feb9044d37353b4434a1fc435de149e8f8f61007b27c" Oct 14 07:07:14 crc kubenswrapper[5058]: E1014 07:07:14.223719 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:93b475af15a0d10e95cb17b98927077f05ac24c89472a601d677eb89f82fd429" Oct 14 07:07:14 crc kubenswrapper[5058]: E1014 07:07:14.223986 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:93b475af15a0d10e95cb17b98927077f05ac24c89472a601d677eb89f82fd429,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5bqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxO
ptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8n5f9_openstack(74e6ff86-f483-424b-9f6d-5f6d0c1e81b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 07:07:14 crc kubenswrapper[5058]: E1014 07:07:14.227394 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8n5f9" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247053 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2k2\" (UniqueName: \"kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247187 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247231 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247275 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247584 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.247650 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.349079 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.349442 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.349498 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv2k2\" (UniqueName: \"kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.350990 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.351055 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.351125 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.361788 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.362378 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.363087 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.363463 5058 scope.go:117] "RemoveContainer" containerID="981f55bc2950644b5568f2df7a34102d6a8a0fca9f0f682f0f6d57a9327d7fac" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.363642 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.366531 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.370457 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv2k2\" (UniqueName: \"kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2\") pod \"keystone-bootstrap-2nnd9\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.374420 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.386728 5058 scope.go:117] "RemoveContainer" containerID="4ab1702d57926051d71cff022e85a3270b24f02a5cba9e2d68ee87802fbb127d" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.432220 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.456462 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb\") pod \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.456845 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0\") pod \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.456911 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config\") pod \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.456951 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc\") pod \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.456985 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9jxx\" (UniqueName: \"kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx\") pod \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.457019 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb\") pod 
\"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\" (UID: \"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9\") " Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.462841 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx" (OuterVolumeSpecName: "kube-api-access-s9jxx") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "kube-api-access-s9jxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.502204 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.511979 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.514425 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.526908 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config" (OuterVolumeSpecName: "config") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.527343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" (UID: "cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559483 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559518 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559560 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559574 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559585 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.559597 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9jxx\" (UniqueName: \"kubernetes.io/projected/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9-kube-api-access-s9jxx\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.711663 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jkqz7" event={"ID":"3e446e94-abcb-495b-be50-2100e77d2e4e","Type":"ContainerStarted","Data":"e9c65bbef4be0b4564dab46e8756ca50ac118a12115c200ffc6633fac603b802"} Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.716445 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.716445 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9ddcb47c-s7r8b" event={"ID":"cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9","Type":"ContainerDied","Data":"d44374347fede8a77c6dcecb0a1de3b895c837600114d0954d7b423f09480169"} Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.716583 5058 scope.go:117] "RemoveContainer" containerID="31884091c1de1ca3725eda2705641674af18bc20436040f8d5545760d1d2cdd3" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.722694 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerStarted","Data":"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274"} Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.746057 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jkqz7" podStartSLOduration=2.8711907549999998 podStartE2EDuration="22.746037761s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="2025-10-14 07:06:54.349048635 +0000 UTC m=+1162.260132441" lastFinishedPulling="2025-10-14 07:07:14.223895621 +0000 UTC m=+1182.134979447" observedRunningTime="2025-10-14 07:07:14.73938563 +0000 UTC m=+1182.650469476" watchObservedRunningTime="2025-10-14 07:07:14.746037761 +0000 UTC m=+1182.657121577" Oct 14 07:07:14 crc kubenswrapper[5058]: E1014 07:07:14.761247 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:93b475af15a0d10e95cb17b98927077f05ac24c89472a601d677eb89f82fd429\\\"\"" pod="openstack/cinder-db-sync-8n5f9" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.761476 5058 scope.go:117] "RemoveContainer" containerID="c0afe83127c3ff80e32efae5fbf2586316b4e6f66da08ad70f4139a81db5b62b" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.780259 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.780320 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d9ddcb47c-s7r8b"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.811138 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f7a9a5-d047-422b-9924-dcc3d434da12" path="/var/lib/kubelet/pods/27f7a9a5-d047-422b-9924-dcc3d434da12/volumes" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.812932 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e21f83-8001-4f19-aa07-20f5bc3f7700" path="/var/lib/kubelet/pods/73e21f83-8001-4f19-aa07-20f5bc3f7700/volumes" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.814038 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" path="/var/lib/kubelet/pods/cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9/volumes" Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.906728 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2nnd9"] Oct 14 07:07:14 crc kubenswrapper[5058]: W1014 07:07:14.912970 5058 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c90a327_1079_4375_986f_7becd72aaaad.slice/crio-cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf WatchSource:0}: Error finding container cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf: Status 404 returned error can't find the container with id cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.975754 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:07:14 crc kubenswrapper[5058]: I1014 07:07:14.995816 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cxv76" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.070129 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config\") pod \"15c0ea98-8f40-4281-b309-4c3994b849ba\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.070356 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjn62\" (UniqueName: \"kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62\") pod \"15c0ea98-8f40-4281-b309-4c3994b849ba\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.070411 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle\") pod \"15c0ea98-8f40-4281-b309-4c3994b849ba\" (UID: \"15c0ea98-8f40-4281-b309-4c3994b849ba\") " Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.076587 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62" (OuterVolumeSpecName: "kube-api-access-kjn62") pod "15c0ea98-8f40-4281-b309-4c3994b849ba" (UID: "15c0ea98-8f40-4281-b309-4c3994b849ba"). InnerVolumeSpecName "kube-api-access-kjn62". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.101933 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config" (OuterVolumeSpecName: "config") pod "15c0ea98-8f40-4281-b309-4c3994b849ba" (UID: "15c0ea98-8f40-4281-b309-4c3994b849ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.106737 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15c0ea98-8f40-4281-b309-4c3994b849ba" (UID: "15c0ea98-8f40-4281-b309-4c3994b849ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.172458 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjn62\" (UniqueName: \"kubernetes.io/projected/15c0ea98-8f40-4281-b309-4c3994b849ba-kube-api-access-kjn62\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.172510 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.172524 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15c0ea98-8f40-4281-b309-4c3994b849ba-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.547532 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.784403 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2nnd9" event={"ID":"9c90a327-1079-4375-986f-7becd72aaaad","Type":"ContainerStarted","Data":"1a2e702ac777627cc9b1f6f7b7f658646716149be400400fe2ddfa4f64dbf7eb"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.784477 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2nnd9" event={"ID":"9c90a327-1079-4375-986f-7becd72aaaad","Type":"ContainerStarted","Data":"cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.805716 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cxv76" event={"ID":"15c0ea98-8f40-4281-b309-4c3994b849ba","Type":"ContainerDied","Data":"f737b233cb9191d93a8b3cf433dc50a6218d343930ddc33020275f02eb92cb05"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.805755 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f737b233cb9191d93a8b3cf433dc50a6218d343930ddc33020275f02eb92cb05" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.805776 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cxv76" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.818572 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2nnd9" podStartSLOduration=1.8185557989999999 podStartE2EDuration="1.818555799s" podCreationTimestamp="2025-10-14 07:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:15.802773277 +0000 UTC m=+1183.713857123" watchObservedRunningTime="2025-10-14 07:07:15.818555799 +0000 UTC m=+1183.729639595" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.836514 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerStarted","Data":"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.836567 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerStarted","Data":"c91733e14c017bb81f8d77921584b26ac7565ed5c883c08960aade75ba81632c"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.838882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerStarted","Data":"36864e0bee0965c6993bf5930ccc9c33cbbab77ac7fc4ff4d5a49fc0173b4657"} Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.868769 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:15 crc kubenswrapper[5058]: E1014 07:07:15.869259 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="dnsmasq-dns" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.869276 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="dnsmasq-dns" Oct 14 07:07:15 crc kubenswrapper[5058]: E1014 07:07:15.869286 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c0ea98-8f40-4281-b309-4c3994b849ba" containerName="neutron-db-sync" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.869294 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c0ea98-8f40-4281-b309-4c3994b849ba" containerName="neutron-db-sync" Oct 14 07:07:15 crc kubenswrapper[5058]: E1014 07:07:15.869333 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="init" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.869342 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="init" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.869721 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8ea136-fa18-4c7c-92cf-67f1ae3c55a9" containerName="dnsmasq-dns" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.869750 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c0ea98-8f40-4281-b309-4c3994b849ba" containerName="neutron-db-sync" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.870956 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.899307 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.972377 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.977006 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.980901 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fhksz" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.981184 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.981233 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.981294 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.986003 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.994321 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhxq\" (UniqueName: \"kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.994397 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.994490 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.994862 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.996099 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:15 crc kubenswrapper[5058]: I1014 07:07:15.996214 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.100835 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.100920 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.100969 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhxq\" (UniqueName: \"kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.100998 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101024 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101062 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101084 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101103 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101141 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101161 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.101218 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.102313 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.102549 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.103012 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.103658 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.103729 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.132135 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhxq\" (UniqueName: \"kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq\") pod \"dnsmasq-dns-649d884857-tv6w2\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.202875 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.202967 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.203005 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.203122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.203160 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.216662 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.216982 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.219585 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.220605 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.250810 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g\") pod \"neutron-679d46664d-5nhrm\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " 
pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.266415 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.309774 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.856641 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerStarted","Data":"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd"} Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.858652 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerStarted","Data":"22d0e119b6d953d94a386b6ad80d277a41637914a464324bd83ec47b8122ab60"} Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.860468 5058 generic.go:334] "Generic (PLEG): container finished" podID="3e446e94-abcb-495b-be50-2100e77d2e4e" containerID="e9c65bbef4be0b4564dab46e8756ca50ac118a12115c200ffc6633fac603b802" exitCode=0 Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.861357 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jkqz7" event={"ID":"3e446e94-abcb-495b-be50-2100e77d2e4e","Type":"ContainerDied","Data":"e9c65bbef4be0b4564dab46e8756ca50ac118a12115c200ffc6633fac603b802"} Oct 14 07:07:16 crc kubenswrapper[5058]: I1014 07:07:16.876779 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.876764276 podStartE2EDuration="4.876764276s" podCreationTimestamp="2025-10-14 07:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:16.875959623 +0000 UTC m=+1184.787043439" watchObservedRunningTime="2025-10-14 07:07:16.876764276 +0000 UTC m=+1184.787848082" Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.046121 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:17 crc kubenswrapper[5058]: W1014 07:07:17.054753 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0998b4f5_60ea_4742_b333_e4de1dad4e5f.slice/crio-da6126f16517bf14fe27dd135d6749083715efd656c56ef5400f56a016a97177 WatchSource:0}: Error finding container da6126f16517bf14fe27dd135d6749083715efd656c56ef5400f56a016a97177: Status 404 returned error can't find the container with id da6126f16517bf14fe27dd135d6749083715efd656c56ef5400f56a016a97177 Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.231535 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.872751 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerStarted","Data":"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.873101 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" 
event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerStarted","Data":"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.873117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerStarted","Data":"4870b627584e0f2634001cff067f5c78383a6b7823a4a903201f53e3c9c74e07"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.873315 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.885022 5058 generic.go:334] "Generic (PLEG): container finished" podID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerID="699c32a0319df221a5207c8b09c9daa103c1e68203b2494c7d5815d06d84a20c" exitCode=0 Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.885145 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649d884857-tv6w2" event={"ID":"0998b4f5-60ea-4742-b333-e4de1dad4e5f","Type":"ContainerDied","Data":"699c32a0319df221a5207c8b09c9daa103c1e68203b2494c7d5815d06d84a20c"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.885178 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649d884857-tv6w2" event={"ID":"0998b4f5-60ea-4742-b333-e4de1dad4e5f","Type":"ContainerStarted","Data":"da6126f16517bf14fe27dd135d6749083715efd656c56ef5400f56a016a97177"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.897948 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-679d46664d-5nhrm" podStartSLOduration=2.897928124 podStartE2EDuration="2.897928124s" podCreationTimestamp="2025-10-14 07:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:17.890913743 +0000 UTC m=+1185.801997549" watchObservedRunningTime="2025-10-14 07:07:17.897928124 +0000 UTC m=+1185.809011950" Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.906145 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerStarted","Data":"f4c19c60ec6d3a4d8a03bc9568c6fb719e8e7f9d8261834a127d07601bddbda1"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.921490 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerStarted","Data":"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb"} Oct 14 07:07:17 crc kubenswrapper[5058]: I1014 07:07:17.975639 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.975620778 podStartE2EDuration="5.975620778s" podCreationTimestamp="2025-10-14 07:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:17.941228523 +0000 UTC m=+1185.852312339" watchObservedRunningTime="2025-10-14 07:07:17.975620778 +0000 UTC m=+1185.886704584" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.380664 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jkqz7" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.444443 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle\") pod \"3e446e94-abcb-495b-be50-2100e77d2e4e\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.444482 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data\") pod \"3e446e94-abcb-495b-be50-2100e77d2e4e\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.444513 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wcj\" (UniqueName: \"kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj\") pod \"3e446e94-abcb-495b-be50-2100e77d2e4e\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.444579 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts\") pod \"3e446e94-abcb-495b-be50-2100e77d2e4e\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.444605 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs\") pod \"3e446e94-abcb-495b-be50-2100e77d2e4e\" (UID: \"3e446e94-abcb-495b-be50-2100e77d2e4e\") " Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.445211 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs" (OuterVolumeSpecName: "logs") pod "3e446e94-abcb-495b-be50-2100e77d2e4e" (UID: "3e446e94-abcb-495b-be50-2100e77d2e4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.449291 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts" (OuterVolumeSpecName: "scripts") pod "3e446e94-abcb-495b-be50-2100e77d2e4e" (UID: "3e446e94-abcb-495b-be50-2100e77d2e4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.450534 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj" (OuterVolumeSpecName: "kube-api-access-t7wcj") pod "3e446e94-abcb-495b-be50-2100e77d2e4e" (UID: "3e446e94-abcb-495b-be50-2100e77d2e4e"). InnerVolumeSpecName "kube-api-access-t7wcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.472316 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data" (OuterVolumeSpecName: "config-data") pod "3e446e94-abcb-495b-be50-2100e77d2e4e" (UID: "3e446e94-abcb-495b-be50-2100e77d2e4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.475691 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e446e94-abcb-495b-be50-2100e77d2e4e" (UID: "3e446e94-abcb-495b-be50-2100e77d2e4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.547893 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.547928 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wcj\" (UniqueName: \"kubernetes.io/projected/3e446e94-abcb-495b-be50-2100e77d2e4e-kube-api-access-t7wcj\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.547938 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.547946 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e446e94-abcb-495b-be50-2100e77d2e4e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.547953 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e446e94-abcb-495b-be50-2100e77d2e4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.775975 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:07:18 crc kubenswrapper[5058]: E1014 07:07:18.776970 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e446e94-abcb-495b-be50-2100e77d2e4e" containerName="placement-db-sync" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.776988 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e446e94-abcb-495b-be50-2100e77d2e4e" containerName="placement-db-sync" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.777163 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e446e94-abcb-495b-be50-2100e77d2e4e" containerName="placement-db-sync" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.778080 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.781117 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.781405 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.788685 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856025 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856064 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856099 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856266 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856348 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856391 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c7h\" (UniqueName: \"kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.856493 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.942516 5058 generic.go:334] "Generic (PLEG): container finished" podID="9c90a327-1079-4375-986f-7becd72aaaad" 
containerID="1a2e702ac777627cc9b1f6f7b7f658646716149be400400fe2ddfa4f64dbf7eb" exitCode=0 Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.942621 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2nnd9" event={"ID":"9c90a327-1079-4375-986f-7becd72aaaad","Type":"ContainerDied","Data":"1a2e702ac777627cc9b1f6f7b7f658646716149be400400fe2ddfa4f64dbf7eb"} Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.947184 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jkqz7" event={"ID":"3e446e94-abcb-495b-be50-2100e77d2e4e","Type":"ContainerDied","Data":"4c0934b8de50ca787136059d841a49f2b24c1a00638e49cb87021a0fb3d21b6f"} Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.947218 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c0934b8de50ca787136059d841a49f2b24c1a00638e49cb87021a0fb3d21b6f" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.947237 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jkqz7" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.953022 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649d884857-tv6w2" event={"ID":"0998b4f5-60ea-4742-b333-e4de1dad4e5f","Type":"ContainerStarted","Data":"81847f1c4c49de2d93ce7df35b6fbf3e9f6055186bcf98bd822133f2ff45d3d1"} Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.953452 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.960983 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.961071 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.961114 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.961152 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c7h\" (UniqueName: \"kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.961216 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc 
kubenswrapper[5058]: I1014 07:07:18.961749 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.961817 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.975340 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.977197 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.983184 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.985127 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.988420 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.995020 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:18 crc kubenswrapper[5058]: I1014 07:07:18.998102 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c7h\" (UniqueName: \"kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h\") pod \"neutron-6b8d77dbf7-m7vgh\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.011383 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-649d884857-tv6w2" podStartSLOduration=4.011339193 
podStartE2EDuration="4.011339193s" podCreationTimestamp="2025-10-14 07:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:19.000221364 +0000 UTC m=+1186.911305180" watchObservedRunningTime="2025-10-14 07:07:19.011339193 +0000 UTC m=+1186.922422999" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.073656 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.075537 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.087047 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.087193 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.087241 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.087275 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.089473 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-j9xcl" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.098064 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.107368 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184593 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184675 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184732 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184757 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9gl\" (UniqueName: \"kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184809 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.184854 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287579 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts\") pod \"placement-5c888fb598-6htl5\" (UID: 
\"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287612 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287666 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287689 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9gl\" (UniqueName: \"kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287716 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.287749 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.288110 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.292758 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.293299 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.297394 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.305288 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9gl\" (UniqueName: \"kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.318206 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.329455 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs\") pod \"placement-5c888fb598-6htl5\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.397508 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:19 crc kubenswrapper[5058]: W1014 07:07:19.729620 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c83841_4ff0_41a0_be22_72af1a0f2bef.slice/crio-8284104a9035afc015b8dbef1411b9bccb954ac227611a2f308b4754fbeb604c WatchSource:0}: Error finding container 8284104a9035afc015b8dbef1411b9bccb954ac227611a2f308b4754fbeb604c: Status 404 returned error can't find the container with id 8284104a9035afc015b8dbef1411b9bccb954ac227611a2f308b4754fbeb604c Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.738498 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.886251 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:07:19 crc kubenswrapper[5058]: W1014 07:07:19.899575 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3afd2a77_5f33_4fed_8b1d_5ebc565a24e9.slice/crio-226fdbc0aa3248fef52f34a0b7a3730ac8f28b94d4de423220a71b7d5137a478 WatchSource:0}: Error finding container 226fdbc0aa3248fef52f34a0b7a3730ac8f28b94d4de423220a71b7d5137a478: Status 404 returned error can't find the container with id 226fdbc0aa3248fef52f34a0b7a3730ac8f28b94d4de423220a71b7d5137a478 Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.969336 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerStarted","Data":"226fdbc0aa3248fef52f34a0b7a3730ac8f28b94d4de423220a71b7d5137a478"} Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.973346 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerStarted","Data":"65ea6b846599e21eb16e6adb81ef1c3d6526e566be9b38e51f208cad25bb0ef5"} Oct 14 07:07:19 crc kubenswrapper[5058]: I1014 07:07:19.973381 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" 
event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerStarted","Data":"8284104a9035afc015b8dbef1411b9bccb954ac227611a2f308b4754fbeb604c"} Oct 14 07:07:20 crc kubenswrapper[5058]: I1014 07:07:20.988280 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerStarted","Data":"6445c8e5022085e59b4cf30be5b9f660e7e4d84d81a5dc48c1a2c637c48a4414"} Oct 14 07:07:20 crc kubenswrapper[5058]: I1014 07:07:20.992107 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerStarted","Data":"cd295ee61f567f1a94e09d2c0e675720e548cfe9f3dbd734af3e07368dc83f14"} Oct 14 07:07:20 crc kubenswrapper[5058]: I1014 07:07:20.992265 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:21 crc kubenswrapper[5058]: I1014 07:07:21.016273 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8d77dbf7-m7vgh" podStartSLOduration=3.016252546 podStartE2EDuration="3.016252546s" podCreationTimestamp="2025-10-14 07:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:21.010202673 +0000 UTC m=+1188.921286489" watchObservedRunningTime="2025-10-14 07:07:21.016252546 +0000 UTC m=+1188.927336362" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.126784 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.127054 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.181535 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.198570 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.455491 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.456118 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.489197 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.523764 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.650319 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.689852 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.689952 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv2k2\" (UniqueName: \"kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.690017 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.690038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.690063 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.690102 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data\") pod \"9c90a327-1079-4375-986f-7becd72aaaad\" (UID: \"9c90a327-1079-4375-986f-7becd72aaaad\") " Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.695206 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts" (OuterVolumeSpecName: "scripts") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.695672 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.696288 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.710376 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2" (OuterVolumeSpecName: "kube-api-access-nv2k2") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "kube-api-access-nv2k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.716531 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data" (OuterVolumeSpecName: "config-data") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.735696 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c90a327-1079-4375-986f-7becd72aaaad" (UID: "9c90a327-1079-4375-986f-7becd72aaaad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.792993 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.793031 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv2k2\" (UniqueName: \"kubernetes.io/projected/9c90a327-1079-4375-986f-7becd72aaaad-kube-api-access-nv2k2\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.793046 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.793058 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.793069 5058 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:23 crc kubenswrapper[5058]: I1014 07:07:23.793080 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c90a327-1079-4375-986f-7becd72aaaad-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.019722 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2nnd9" event={"ID":"9c90a327-1079-4375-986f-7becd72aaaad","Type":"ContainerDied","Data":"cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf"} Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.019780 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6714d54748b54589f4cd4f8a151d6fc090440e913276f51ab99b6f979dabaf" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 
07:07:24.019789 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2nnd9" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.020829 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.020911 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.020925 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.020935 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.880983 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:07:24 crc kubenswrapper[5058]: E1014 07:07:24.882036 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c90a327-1079-4375-986f-7becd72aaaad" containerName="keystone-bootstrap" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.882051 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c90a327-1079-4375-986f-7becd72aaaad" containerName="keystone-bootstrap" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.882380 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c90a327-1079-4375-986f-7becd72aaaad" containerName="keystone-bootstrap" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.888497 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.894357 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.894462 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.895053 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.894495 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.894525 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.902923 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xv7zn" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.904926 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922672 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922707 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922740 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922767 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922815 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2x4q\" (UniqueName: \"kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922860 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:24 crc kubenswrapper[5058]: I1014 07:07:24.922887 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024448 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024502 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024530 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024554 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024583 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024604 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2x4q\" (UniqueName: \"kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024635 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.024653 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.032732 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.038612 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.039270 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.039437 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: 
\"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.039817 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.041189 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.041586 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.055047 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2x4q\" (UniqueName: \"kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q\") pod \"keystone-6db45f8b6f-vkgmc\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:25 crc kubenswrapper[5058]: I1014 07:07:25.209088 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.021357 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:07:26 crc kubenswrapper[5058]: W1014 07:07:26.024071 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded42eec9_b3be_4971_914c_dc5d6603b0e1.slice/crio-ff42a3270064298d8556ae928c31c7a7516be9ff98d610b1549b11f3705b5f6e WatchSource:0}: Error finding container ff42a3270064298d8556ae928c31c7a7516be9ff98d610b1549b11f3705b5f6e: Status 404 returned error can't find the container with id ff42a3270064298d8556ae928c31c7a7516be9ff98d610b1549b11f3705b5f6e Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.083261 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerStarted","Data":"3f22606015530d5cd831d677840b6de4c02774150161b7a98b4b65f28323dbc7"} Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.083959 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.084293 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.105176 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerStarted","Data":"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415"} Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.114964 5058 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-6db45f8b6f-vkgmc" event={"ID":"ed42eec9-b3be-4971-914c-dc5d6603b0e1","Type":"ContainerStarted","Data":"ff42a3270064298d8556ae928c31c7a7516be9ff98d610b1549b11f3705b5f6e"} Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.119111 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c888fb598-6htl5" podStartSLOduration=7.119094898 podStartE2EDuration="7.119094898s" podCreationTimestamp="2025-10-14 07:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:26.116144994 +0000 UTC m=+1194.027228800" watchObservedRunningTime="2025-10-14 07:07:26.119094898 +0000 UTC m=+1194.030178704" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.127569 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.127857 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.267965 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.296334 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.376064 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.376341 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="dnsmasq-dns" containerID="cri-o://82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515" gracePeriod=10 Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.399209 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.399345 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 07:07:26 crc kubenswrapper[5058]: I1014 07:07:26.401279 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.021209 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.124253 5058 generic.go:334] "Generic (PLEG): container finished" podID="e42fe5a2-62f8-490d-b873-b682c849cada" containerID="82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515" exitCode=0 Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.124327 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerDied","Data":"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515"} Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.124356 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" event={"ID":"e42fe5a2-62f8-490d-b873-b682c849cada","Type":"ContainerDied","Data":"3c6cce9de5e739cba0d9a84bed0832a25d9e835a1ec50944bca3da98c591c977"} Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.124374 5058 scope.go:117] "RemoveContainer" containerID="82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.124413 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f7885f7f-gp8sj" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.133931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6db45f8b6f-vkgmc" event={"ID":"ed42eec9-b3be-4971-914c-dc5d6603b0e1","Type":"ContainerStarted","Data":"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a"} Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.134569 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.149944 5058 scope.go:117] "RemoveContainer" containerID="09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.154774 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6db45f8b6f-vkgmc" podStartSLOduration=3.154753021 podStartE2EDuration="3.154753021s" podCreationTimestamp="2025-10-14 07:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:27.152926519 +0000 UTC m=+1195.064010335" watchObservedRunningTime="2025-10-14 07:07:27.154753021 +0000 UTC m=+1195.065836827" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.184091 5058 scope.go:117] "RemoveContainer" containerID="82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515" Oct 14 07:07:27 crc kubenswrapper[5058]: E1014 07:07:27.184457 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515\": container with ID starting with 82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515 not found: ID does not exist" containerID="82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.184490 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515"} err="failed to get container status \"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515\": rpc error: code = 
NotFound desc = could not find container \"82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515\": container with ID starting with 82758a29acfb1ff4b4f8bd42486c74b22ffea6dcefa3612d208f042db0b00515 not found: ID does not exist" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.184521 5058 scope.go:117] "RemoveContainer" containerID="09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46" Oct 14 07:07:27 crc kubenswrapper[5058]: E1014 07:07:27.184743 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46\": container with ID starting with 09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46 not found: ID does not exist" containerID="09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.184771 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46"} err="failed to get container status \"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46\": rpc error: code = NotFound desc = could not find container \"09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46\": container with ID starting with 09feaa333572c09f2bffb504d6c2e7be6cf467826b00c08be8d3900a83480b46 not found: ID does not exist" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188533 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188617 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r622\" (UniqueName: \"kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188712 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188818 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188901 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: \"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.188934 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc\") pod \"e42fe5a2-62f8-490d-b873-b682c849cada\" (UID: 
\"e42fe5a2-62f8-490d-b873-b682c849cada\") " Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.201074 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622" (OuterVolumeSpecName: "kube-api-access-9r622") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "kube-api-access-9r622". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.259419 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.288068 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.289381 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.292141 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r622\" (UniqueName: \"kubernetes.io/projected/e42fe5a2-62f8-490d-b873-b682c849cada-kube-api-access-9r622\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.292185 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.292205 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.292223 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.298309 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.300763 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config" (OuterVolumeSpecName: "config") pod "e42fe5a2-62f8-490d-b873-b682c849cada" (UID: "e42fe5a2-62f8-490d-b873-b682c849cada"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.395642 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.395694 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e42fe5a2-62f8-490d-b873-b682c849cada-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.466046 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.477231 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77f7885f7f-gp8sj"] Oct 14 07:07:27 crc kubenswrapper[5058]: I1014 07:07:27.621315 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:28 crc kubenswrapper[5058]: I1014 07:07:28.144046 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7fblc" event={"ID":"0298834e-f248-4e99-8f2d-7807c9421143","Type":"ContainerStarted","Data":"3ea1194fa8b6fad8061269f5174e14a7965ef29f4af784db1da27697323e14ad"} Oct 14 07:07:28 crc kubenswrapper[5058]: I1014 07:07:28.148487 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8n5f9" event={"ID":"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0","Type":"ContainerStarted","Data":"a397d6458dbbd90706435ced5073577a0541def8ab0088e116d3ecae79788296"} Oct 14 07:07:28 crc kubenswrapper[5058]: I1014 07:07:28.162573 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7fblc" podStartSLOduration=2.804751322 podStartE2EDuration="36.162557795s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="2025-10-14 07:06:54.294807752 +0000 UTC m=+1162.205891558" lastFinishedPulling="2025-10-14 07:07:27.652614225 +0000 UTC m=+1195.563698031" observedRunningTime="2025-10-14 07:07:28.156548743 +0000 UTC m=+1196.067632549" watchObservedRunningTime="2025-10-14 07:07:28.162557795 +0000 UTC m=+1196.073641601" Oct 14 07:07:28 crc kubenswrapper[5058]: I1014 07:07:28.176003 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8n5f9" podStartSLOduration=2.525905999 podStartE2EDuration="36.17598417s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="2025-10-14 07:06:53.704005636 +0000 UTC m=+1161.615089442" lastFinishedPulling="2025-10-14 07:07:27.354083807 +0000 UTC m=+1195.265167613" observedRunningTime="2025-10-14 07:07:28.16866075 +0000 UTC m=+1196.079744576" watchObservedRunningTime="2025-10-14 07:07:28.17598417 +0000 UTC m=+1196.087067966" Oct 14 07:07:28 crc kubenswrapper[5058]: I1014 07:07:28.813976 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" 
path="/var/lib/kubelet/pods/e42fe5a2-62f8-490d-b873-b682c849cada/volumes" Oct 14 07:07:31 crc kubenswrapper[5058]: I1014 07:07:31.187368 5058 generic.go:334] "Generic (PLEG): container finished" podID="0298834e-f248-4e99-8f2d-7807c9421143" containerID="3ea1194fa8b6fad8061269f5174e14a7965ef29f4af784db1da27697323e14ad" exitCode=0 Oct 14 07:07:31 crc kubenswrapper[5058]: I1014 07:07:31.187450 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7fblc" event={"ID":"0298834e-f248-4e99-8f2d-7807c9421143","Type":"ContainerDied","Data":"3ea1194fa8b6fad8061269f5174e14a7965ef29f4af784db1da27697323e14ad"} Oct 14 07:07:32 crc kubenswrapper[5058]: I1014 07:07:32.202023 5058 generic.go:334] "Generic (PLEG): container finished" podID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" containerID="a397d6458dbbd90706435ced5073577a0541def8ab0088e116d3ecae79788296" exitCode=0 Oct 14 07:07:32 crc kubenswrapper[5058]: I1014 07:07:32.202114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8n5f9" event={"ID":"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0","Type":"ContainerDied","Data":"a397d6458dbbd90706435ced5073577a0541def8ab0088e116d3ecae79788296"} Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.515443 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7fblc" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.621096 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rg8\" (UniqueName: \"kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8\") pod \"0298834e-f248-4e99-8f2d-7807c9421143\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.621836 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data\") pod \"0298834e-f248-4e99-8f2d-7807c9421143\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.622112 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle\") pod \"0298834e-f248-4e99-8f2d-7807c9421143\" (UID: \"0298834e-f248-4e99-8f2d-7807c9421143\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.625304 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0298834e-f248-4e99-8f2d-7807c9421143" (UID: "0298834e-f248-4e99-8f2d-7807c9421143"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.625469 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8" (OuterVolumeSpecName: "kube-api-access-w9rg8") pod "0298834e-f248-4e99-8f2d-7807c9421143" (UID: "0298834e-f248-4e99-8f2d-7807c9421143"). InnerVolumeSpecName "kube-api-access-w9rg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.655480 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0298834e-f248-4e99-8f2d-7807c9421143" (UID: "0298834e-f248-4e99-8f2d-7807c9421143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.675937 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.733040 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.733079 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rg8\" (UniqueName: \"kubernetes.io/projected/0298834e-f248-4e99-8f2d-7807c9421143-kube-api-access-w9rg8\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.733091 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0298834e-f248-4e99-8f2d-7807c9421143-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.833885 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data\") pod \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.833957 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id\") pod \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.833988 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data\") pod \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.834096 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts\") pod \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.834266 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5bqp\" (UniqueName: \"kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp\") pod \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.834298 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle\") pod 
\"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\" (UID: \"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0\") " Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.834314 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.834688 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.837667 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts" (OuterVolumeSpecName: "scripts") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.840277 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.841162 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp" (OuterVolumeSpecName: "kube-api-access-m5bqp") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "kube-api-access-m5bqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.859089 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.878235 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data" (OuterVolumeSpecName: "config-data") pod "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" (UID: "74e6ff86-f483-424b-9f6d-5f6d0c1e81b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.936903 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.936944 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.936955 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.936968 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5bqp\" (UniqueName: \"kubernetes.io/projected/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-kube-api-access-m5bqp\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:33 crc kubenswrapper[5058]: I1014 07:07:33.936980 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.224653 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerStarted","Data":"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a"} Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.224783 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.225001 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-central-agent" containerID="cri-o://939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274" gracePeriod=30 Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.225029 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="sg-core" containerID="cri-o://1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415" gracePeriod=30 Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.225050 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-notification-agent" containerID="cri-o://7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb" gracePeriod=30 Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.225039 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="proxy-httpd" containerID="cri-o://3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a" gracePeriod=30 Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.234094 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8n5f9" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.234094 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8n5f9" event={"ID":"74e6ff86-f483-424b-9f6d-5f6d0c1e81b0","Type":"ContainerDied","Data":"cb9dabed246e983ecc40863211f250f08e6419ab6286e89778a2acf7d9c82a0b"} Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.234261 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9dabed246e983ecc40863211f250f08e6419ab6286e89778a2acf7d9c82a0b" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.237736 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7fblc" event={"ID":"0298834e-f248-4e99-8f2d-7807c9421143","Type":"ContainerDied","Data":"c9c6801cfdfa693dcbad2415f1615eb58e26da24827b6d2c3a44eaf73015e100"} Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.237778 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c6801cfdfa693dcbad2415f1615eb58e26da24827b6d2c3a44eaf73015e100" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.238032 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7fblc" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.278689 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.106292487 podStartE2EDuration="42.278672391s" podCreationTimestamp="2025-10-14 07:06:52 +0000 UTC" firstStartedPulling="2025-10-14 07:06:54.346604355 +0000 UTC m=+1162.257688161" lastFinishedPulling="2025-10-14 07:07:33.518984259 +0000 UTC m=+1201.430068065" observedRunningTime="2025-10-14 07:07:34.25874654 +0000 UTC m=+1202.169830376" watchObservedRunningTime="2025-10-14 07:07:34.278672391 +0000 UTC m=+1202.189756197" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.517695 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:34 crc kubenswrapper[5058]: E1014 07:07:34.518469 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" containerName="cinder-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518484 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" containerName="cinder-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: E1014 07:07:34.518498 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0298834e-f248-4e99-8f2d-7807c9421143" containerName="barbican-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518506 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0298834e-f248-4e99-8f2d-7807c9421143" containerName="barbican-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: E1014 07:07:34.518541 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="init" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518549 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="init" Oct 14 07:07:34 crc kubenswrapper[5058]: E1014 07:07:34.518561 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="dnsmasq-dns" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518568 5058 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="dnsmasq-dns" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518772 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0298834e-f248-4e99-8f2d-7807c9421143" containerName="barbican-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518785 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42fe5a2-62f8-490d-b873-b682c849cada" containerName="dnsmasq-dns" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.518804 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" containerName="cinder-db-sync" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.519913 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.522341 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.522507 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.522618 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.522748 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jd7h2" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.539110 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.584529 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.585940 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.606588 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650244 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650316 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650337 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650371 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650393 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650470 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650486 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650522 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650612 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650640 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4l9\" (UniqueName: \"kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650682 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.650701 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslpl\" (UniqueName: \"kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.723196 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.724971 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.733226 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.733628 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.733905 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cxc4b" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.737603 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.747471 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.749548 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.752924 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4l9\" (UniqueName: \"kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.752983 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753017 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dslpl\" (UniqueName: \"kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753050 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753080 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753103 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753129 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753157 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753235 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753258 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753283 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753359 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.753888 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.754477 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.757934 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.764510 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.773976 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.774535 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " 
pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.780433 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.780836 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.780910 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.792635 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.804239 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.817266 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4l9\" (UniqueName: \"kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9\") pod \"cinder-scheduler-0\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.826358 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.853223 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854388 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dslpl\" (UniqueName: \"kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl\") pod \"dnsmasq-dns-557fb4cfc9-7b29s\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") " pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854666 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlsk\" (UniqueName: \"kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854719 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9tnz\" (UniqueName: \"kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854741 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854771 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854875 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854909 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854963 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.854989 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.855008 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.855047 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.882935 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.884540 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.902529 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.913289 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.913395 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.924113 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"] Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.924565 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.933567 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.933567 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.961945 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.961991 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962055 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962072 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962101 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962127 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlsk\" (UniqueName: \"kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962160 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tnz\" (UniqueName: \"kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962176 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962195 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.962653 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.967121 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.972192 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:34 crc kubenswrapper[5058]: I1014 07:07:34.985148 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.031321 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tnz\" (UniqueName: \"kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.031698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.034469 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.035185 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data\") pod \"barbican-worker-76589ffb55-8wb6q\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " pod="openstack/barbican-worker-76589ffb55-8wb6q"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.035240 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"]
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.046039 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.055853 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlsk\" (UniqueName: \"kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk\") pod \"barbican-keystone-listener-6dd6ff5ccd-fdrls\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.063761 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.063798 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.063887 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.063977 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0"
Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064022 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4rq\" (UniqueName: \"kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0"
pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064069 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064089 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064147 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064168 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064205 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.064251 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xk2\" (UniqueName: \"kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.086582 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.088554 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.091025 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.120742 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172270 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172342 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gx7c\" (UniqueName: \"kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172396 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4rq\" (UniqueName: \"kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172424 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172467 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172575 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172623 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172659 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172705 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xk2\" (UniqueName: \"kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172751 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172803 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172897 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172919 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.172971 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.173024 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom\") 
pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.175872 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.175919 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.177344 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.178759 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.178869 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.180101 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.181406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.181749 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.182829 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.187731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.190455 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.194337 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xk2\" (UniqueName: \"kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2\") pod \"dnsmasq-dns-cb9f44c77-2q9fb\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.197576 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4rq\" (UniqueName: \"kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq\") pod \"cinder-api-0\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") " pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.211171 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.232114 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.265720 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270149 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerID="3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a" exitCode=0 Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270182 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerID="1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415" exitCode=2 Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270193 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerID="939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274" exitCode=0 Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerDied","Data":"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a"} Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270244 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerDied","Data":"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415"} Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.270259 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerDied","Data":"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274"} Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.275138 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gx7c\" (UniqueName: \"kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.275269 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.275315 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.275347 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.275379 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom\") pod 
\"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.276347 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.280830 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.282403 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.283409 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.291831 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gx7c\" (UniqueName: \"kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c\") pod \"barbican-api-74dcb85b56-gfnsw\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.292845 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.423959 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.478404 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.494327 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"] Oct 14 07:07:35 crc kubenswrapper[5058]: W1014 07:07:35.502129 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc91e794b_6511_467d_a93a_925b4a1d2756.slice/crio-8cf8f3a694f1eb364a65ef89975bf25b3387cd638f20872e144950c110436e2c WatchSource:0}: Error finding container 8cf8f3a694f1eb364a65ef89975bf25b3387cd638f20872e144950c110436e2c: Status 404 returned error can't find the container with id 8cf8f3a694f1eb364a65ef89975bf25b3387cd638f20872e144950c110436e2c Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.679605 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.783538 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"] Oct 14 07:07:35 crc kubenswrapper[5058]: W1014 07:07:35.821242 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef71f1de_0039_4574_955a_ce760eb7ea3e.slice/crio-8c645d4039f968d2d008a05dd294a7202643f7f844429ba9fed91241010ec0f7 WatchSource:0}: Error finding container 8c645d4039f968d2d008a05dd294a7202643f7f844429ba9fed91241010ec0f7: Status 404 returned error can't find the container with id 8c645d4039f968d2d008a05dd294a7202643f7f844429ba9fed91241010ec0f7 Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.907619 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.968976 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"] Oct 14 07:07:35 crc kubenswrapper[5058]: I1014 07:07:35.977755 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"] Oct 14 07:07:35 crc kubenswrapper[5058]: W1014 07:07:35.977915 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5850000_d8e3_4fc4_ae68_180280760d79.slice/crio-81abeccb667e00722161383fbb358b33fcb1c7d32ac4bf967e5d25c87493922f WatchSource:0}: Error finding container 81abeccb667e00722161383fbb358b33fcb1c7d32ac4bf967e5d25c87493922f: Status 404 returned error can't find the container with id 81abeccb667e00722161383fbb358b33fcb1c7d32ac4bf967e5d25c87493922f Oct 14 07:07:35 crc kubenswrapper[5058]: W1014 07:07:35.980645 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod918cfbce_acfd_4728_927c_548ee1960b8d.slice/crio-875df13a1207c797d663fb0a08daeee6031678452ac6e0f196288680140d066f WatchSource:0}: Error finding container 875df13a1207c797d663fb0a08daeee6031678452ac6e0f196288680140d066f: Status 404 returned error can't find the container with id 875df13a1207c797d663fb0a08daeee6031678452ac6e0f196288680140d066f Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.279931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerStarted","Data":"015830e976db4d696aff76647ecf2250b8c42dab473592559b1e23c2e8e13441"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.282158 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerStarted","Data":"8e1f702490aeef4623164cb3994e0fb31c9d8f6e1c4729188196b4aa0d6853db"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.282193 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerStarted","Data":"875df13a1207c797d663fb0a08daeee6031678452ac6e0f196288680140d066f"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.283408 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerStarted","Data":"8cf8f3a694f1eb364a65ef89975bf25b3387cd638f20872e144950c110436e2c"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.284875 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerStarted","Data":"8c645d4039f968d2d008a05dd294a7202643f7f844429ba9fed91241010ec0f7"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.286268 5058 generic.go:334] "Generic (PLEG): container finished" podID="5c15eed4-506e-4ab5-88b3-35349a96885d" containerID="81d33a3425677c70ef6ad6dd4f10fe768ebf19e211c0a1825c8eba02ba916adc" exitCode=0 Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.286319 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" event={"ID":"5c15eed4-506e-4ab5-88b3-35349a96885d","Type":"ContainerDied","Data":"81d33a3425677c70ef6ad6dd4f10fe768ebf19e211c0a1825c8eba02ba916adc"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.286342 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" event={"ID":"5c15eed4-506e-4ab5-88b3-35349a96885d","Type":"ContainerStarted","Data":"ca49f5727063235c70d97b6d5d787b20d02643daa6337317868446a9c6889b50"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.291431 5058 generic.go:334] "Generic (PLEG): container finished" podID="f5850000-d8e3-4fc4-ae68-180280760d79" containerID="eb5db8b46c6650d243b0539e445eda0e01178b6c8fe4208fb52d0dd48872f6cf" exitCode=0 Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.291482 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" event={"ID":"f5850000-d8e3-4fc4-ae68-180280760d79","Type":"ContainerDied","Data":"eb5db8b46c6650d243b0539e445eda0e01178b6c8fe4208fb52d0dd48872f6cf"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.291502 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" event={"ID":"f5850000-d8e3-4fc4-ae68-180280760d79","Type":"ContainerStarted","Data":"81abeccb667e00722161383fbb358b33fcb1c7d32ac4bf967e5d25c87493922f"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.295623 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerStarted","Data":"d5a7cac8ab393d5b51b1667ad2a872c78c0dea2f4a2a10366886f45852b6b6b7"} Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.769487 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s"
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799046 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dslpl\" (UniqueName: \"kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799163 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799219 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799277 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799320 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.799412 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb\") pod \"5c15eed4-506e-4ab5-88b3-35349a96885d\" (UID: \"5c15eed4-506e-4ab5-88b3-35349a96885d\") "
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.806187 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl" (OuterVolumeSpecName: "kube-api-access-dslpl") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "kube-api-access-dslpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.845300 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config" (OuterVolumeSpecName: "config") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.847298 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.848510 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.849892 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.854956 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c15eed4-506e-4ab5-88b3-35349a96885d" (UID: "5c15eed4-506e-4ab5-88b3-35349a96885d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905318 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905362 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-config\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905378 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905391 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905405 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c15eed4-506e-4ab5-88b3-35349a96885d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:36 crc kubenswrapper[5058]: I1014 07:07:36.905417 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dslpl\" (UniqueName: \"kubernetes.io/projected/5c15eed4-506e-4ab5-88b3-35349a96885d-kube-api-access-dslpl\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.033906 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.333010 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerStarted","Data":"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729"}
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.335874 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerStarted","Data":"141b30bf55fa76ef2c981955c4336c7e83f02fae4881de9365c56fb60e038785"}
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.337282 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74dcb85b56-gfnsw"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.337311 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74dcb85b56-gfnsw"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.344466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s" event={"ID":"5c15eed4-506e-4ab5-88b3-35349a96885d","Type":"ContainerDied","Data":"ca49f5727063235c70d97b6d5d787b20d02643daa6337317868446a9c6889b50"}
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.344516 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557fb4cfc9-7b29s"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.344543 5058 scope.go:117] "RemoveContainer" containerID="81d33a3425677c70ef6ad6dd4f10fe768ebf19e211c0a1825c8eba02ba916adc"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.352394 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" event={"ID":"f5850000-d8e3-4fc4-ae68-180280760d79","Type":"ContainerStarted","Data":"1545e5441d594a102205d61027ae292f75ddd746f5bd8cfb20d42ef0778bf558"}
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.352615 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.362627 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74dcb85b56-gfnsw" podStartSLOduration=3.3626088579999998 podStartE2EDuration="3.362608858s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:37.355103083 +0000 UTC m=+1205.266186899" watchObservedRunningTime="2025-10-14 07:07:37.362608858 +0000 UTC m=+1205.273692664"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.387168 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" podStartSLOduration=3.387151221 podStartE2EDuration="3.387151221s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:37.381993813 +0000 UTC m=+1205.293077639" watchObservedRunningTime="2025-10-14 07:07:37.387151221 +0000 UTC m=+1205.298235017"
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.492265 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"]
Oct 14 07:07:37 crc kubenswrapper[5058]: I1014 07:07:37.519459 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557fb4cfc9-7b29s"]
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.380583 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerStarted","Data":"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39"}
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.381609 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api-log" containerID="cri-o://1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" gracePeriod=30
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.381643 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.381907 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api" containerID="cri-o://2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" gracePeriod=30
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.391022 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerStarted","Data":"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007"}
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.428940 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.428913937 podStartE2EDuration="4.428913937s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:38.408851303 +0000 UTC m=+1206.319935119" watchObservedRunningTime="2025-10-14 07:07:38.428913937 +0000 UTC m=+1206.339997773"
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.837358 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c15eed4-506e-4ab5-88b3-35349a96885d" path="/var/lib/kubelet/pods/5c15eed4-506e-4ab5-88b3-35349a96885d/volumes"
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.928602 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.981894 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982028 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982770 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs" (OuterVolumeSpecName: "logs") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982846 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982908 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982944 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss4rq\" (UniqueName: \"kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.982967 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.983000 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.983369 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data-custom\") pod \"35000b48-e362-4178-95ad-648a5495d7a6\" (UID: \"35000b48-e362-4178-95ad-648a5495d7a6\") "
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.983995 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35000b48-e362-4178-95ad-648a5495d7a6-logs\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.984007 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35000b48-e362-4178-95ad-648a5495d7a6-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.986054 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts" (OuterVolumeSpecName: "scripts") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:07:38 crc kubenswrapper[5058]: I1014 07:07:38.994748 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.001482 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq" (OuterVolumeSpecName: "kube-api-access-ss4rq") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "kube-api-access-ss4rq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.019307 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.037687 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data" (OuterVolumeSpecName: "config-data") pod "35000b48-e362-4178-95ad-648a5495d7a6" (UID: "35000b48-e362-4178-95ad-648a5495d7a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.085789 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.085839 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.085848 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss4rq\" (UniqueName: \"kubernetes.io/projected/35000b48-e362-4178-95ad-648a5495d7a6-kube-api-access-ss4rq\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.085858 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.085866 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35000b48-e362-4178-95ad-648a5495d7a6-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.394851 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.405636 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerID="7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb" exitCode=0
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.405706 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerDied","Data":"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.405736 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f5a513c-30a3-4221-a451-ec25eeb46c4e","Type":"ContainerDied","Data":"2cef869a7234db04b263fa4455919913d82ee38ac53600381600a4e42608a42f"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.405756 5058 scope.go:117] "RemoveContainer" containerID="3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a"
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.405905 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.418466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerStarted","Data":"3f1df96bec469d5c2518c3516461fee935342a6eb2f6de0d71a00e3cd6a36abf"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.418510 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerStarted","Data":"9ee6ab347aa76c2029fca9b728228a4e733b2d2b1b8c42b5e8bdb632ac615717"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.430157 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerStarted","Data":"9b3da1831fb76f980056107a8a4364388de6c4f35dd0467a86ca43e04eafed44"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.430195 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerStarted","Data":"d8be8bca483382edada5af0684b9c15ed30003719e5ed4f96a64a29209b1930d"}
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.449142 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" podStartSLOduration=2.970180822 podStartE2EDuration="5.449120278s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="2025-10-14 07:07:35.824051426 +0000 UTC m=+1203.735135232" lastFinishedPulling="2025-10-14 07:07:38.302990852 +0000 UTC m=+1206.214074688" observedRunningTime="2025-10-14 07:07:39.440206032 +0000 UTC m=+1207.351289838" watchObservedRunningTime="2025-10-14 07:07:39.449120278 +0000 UTC m=+1207.360204084"
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.455035 5058 generic.go:334] "Generic (PLEG): container finished" podID="35000b48-e362-4178-95ad-648a5495d7a6" containerID="2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" exitCode=0
Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.455069 5058 generic.go:334] "Generic (PLEG):
container finished" podID="35000b48-e362-4178-95ad-648a5495d7a6" containerID="1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" exitCode=143 Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.455195 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.456107 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerDied","Data":"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39"} Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.456148 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerDied","Data":"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729"} Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.456167 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"35000b48-e362-4178-95ad-648a5495d7a6","Type":"ContainerDied","Data":"015830e976db4d696aff76647ecf2250b8c42dab473592559b1e23c2e8e13441"} Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.462879 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerStarted","Data":"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86"} Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.471008 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76589ffb55-8wb6q" podStartSLOduration=2.8766224339999997 podStartE2EDuration="5.470986444s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="2025-10-14 07:07:35.707380436 +0000 UTC m=+1203.618464242" lastFinishedPulling="2025-10-14 07:07:38.301744406 +0000 UTC m=+1206.212828252" observedRunningTime="2025-10-14 07:07:39.454986325 +0000 UTC m=+1207.366070131" watchObservedRunningTime="2025-10-14 07:07:39.470986444 +0000 UTC m=+1207.382070250" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.478918 5058 scope.go:117] "RemoveContainer" containerID="1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.491428 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.362933798 podStartE2EDuration="5.491408958s" podCreationTimestamp="2025-10-14 07:07:34 +0000 UTC" firstStartedPulling="2025-10-14 07:07:35.504103246 +0000 UTC m=+1203.415187052" lastFinishedPulling="2025-10-14 07:07:36.632578416 +0000 UTC m=+1204.543662212" observedRunningTime="2025-10-14 07:07:39.488398822 +0000 UTC m=+1207.399482628" watchObservedRunningTime="2025-10-14 07:07:39.491408958 +0000 UTC m=+1207.402492764" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492324 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492406 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492468 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492513 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492623 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492662 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.492679 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd\") pod \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\" (UID: \"4f5a513c-30a3-4221-a451-ec25eeb46c4e\") " Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.495366 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.497361 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.500006 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts" (OuterVolumeSpecName: "scripts") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.521147 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f" (OuterVolumeSpecName: "kube-api-access-fjt4f") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). 
InnerVolumeSpecName "kube-api-access-fjt4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.531902 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.590251 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594762 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjt4f\" (UniqueName: \"kubernetes.io/projected/4f5a513c-30a3-4221-a451-ec25eeb46c4e-kube-api-access-fjt4f\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594830 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594840 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594867 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f5a513c-30a3-4221-a451-ec25eeb46c4e-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594884 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.594892 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.641941 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data" (OuterVolumeSpecName: "config-data") pod "4f5a513c-30a3-4221-a451-ec25eeb46c4e" (UID: "4f5a513c-30a3-4221-a451-ec25eeb46c4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.690831 5058 scope.go:117] "RemoveContainer" containerID="7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.695153 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.696182 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5a513c-30a3-4221-a451-ec25eeb46c4e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.702767 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.720877 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721340 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721356 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721374 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="proxy-httpd" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721381 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="proxy-httpd" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721407 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api-log" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721417 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api-log" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721439 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="sg-core" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721446 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="sg-core" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721455 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-notification-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721463 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-notification-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721480 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c15eed4-506e-4ab5-88b3-35349a96885d" containerName="init" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721488 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c15eed4-506e-4ab5-88b3-35349a96885d" containerName="init" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.721506 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-central-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 
07:07:39.721514 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-central-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721677 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721691 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="35000b48-e362-4178-95ad-648a5495d7a6" containerName="cinder-api-log" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721704 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-central-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721713 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="sg-core" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721720 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="ceilometer-notification-agent" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721731 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c15eed4-506e-4ab5-88b3-35349a96885d" containerName="init" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.721740 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" containerName="proxy-httpd" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.726366 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.730211 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.731681 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.731900 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.752013 5058 scope.go:117] "RemoveContainer" containerID="939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.755784 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.777888 5058 scope.go:117] "RemoveContainer" containerID="3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.793003 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a\": container with ID starting with 3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a not found: ID does not exist" containerID="3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.793051 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a"} err="failed to get container status \"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a\": rpc 
error: code = NotFound desc = could not find container \"3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a\": container with ID starting with 3b7ebdcfc83b1facba0331f5e84e66462e237ef47bb52233ddc53d125120c78a not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.793077 5058 scope.go:117] "RemoveContainer" containerID="1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.795286 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415\": container with ID starting with 1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415 not found: ID does not exist" containerID="1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.795328 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415"} err="failed to get container status \"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415\": rpc error: code = NotFound desc = could not find container \"1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415\": container with ID starting with 1f674c9b7452431bc555029e0089e333c2180cf333f10716c591a382ecea7415 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.795356 5058 scope.go:117] "RemoveContainer" containerID="7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.795994 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb\": container with ID starting with 7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb not found: ID does not exist" containerID="7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.796037 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb"} err="failed to get container status \"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb\": rpc error: code = NotFound desc = could not find container \"7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb\": container with ID starting with 7729c5ae087811b784a8d199331e571be2b9b37e0768b1bc11d6bae7b0a991fb not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.796069 5058 scope.go:117] "RemoveContainer" containerID="939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797520 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797552 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797571 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797650 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797689 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797722 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797737 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797754 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbsh\" (UniqueName: \"kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.797769 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.801434 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274\": container with ID starting with 939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274 not found: ID does not exist" containerID="939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.801473 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274"} err="failed to get container status 
\"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274\": rpc error: code = NotFound desc = could not find container \"939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274\": container with ID starting with 939b6bda0700479eaae47d8d0b7f6697323c5c0d6d36bc6b12c9e602e4135274 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.801498 5058 scope.go:117] "RemoveContainer" containerID="2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.843174 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.853947 5058 scope.go:117] "RemoveContainer" containerID="1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.854219 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.869346 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.879165 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.881756 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.886849 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.888870 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.900853 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.900894 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.900908 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.900932 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.900947 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd\") pod \"ceilometer-0\" (UID: 
\"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901009 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901039 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901099 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901151 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901178 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901191 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbsh\" (UniqueName: \"kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901205 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901222 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxrn\" (UniqueName: \"kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.901298 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.906050 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.906375 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.908000 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.908776 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.928454 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.929035 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.939756 5058 scope.go:117] "RemoveContainer" containerID="2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.940757 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39\": container with ID starting with 2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39 not found: ID does not exist" containerID="2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.940822 5058 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39"} err="failed to get container status \"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39\": rpc error: code = NotFound desc = could not find container \"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39\": container with ID starting with 2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.940885 5058 scope.go:117] "RemoveContainer" containerID="1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.941422 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: E1014 07:07:39.943256 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729\": container with ID starting with 1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729 not found: ID does not exist" containerID="1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943298 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729"} err="failed to get container status \"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729\": rpc error: code = NotFound desc = could not find container \"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729\": container with ID starting with 1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943318 5058 scope.go:117] "RemoveContainer" containerID="2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943468 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943566 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39"} err="failed to get container status \"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39\": rpc error: code = NotFound desc = could not find container \"2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39\": container with ID starting with 2522c563009a3723f136ce17b39d5073498014d8f4b00166439bf6d81e632d39 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943595 5058 scope.go:117] "RemoveContainer" containerID="1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.943962 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729"} err="failed to get container status \"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729\": rpc error: code = NotFound desc = could not find container \"1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729\": container with ID starting with 1e24b2112344643cb697a17466b6986533f9bc0af3b0780c9f57a4fa9a594729 not found: ID does not exist" Oct 14 07:07:39 crc kubenswrapper[5058]: I1014 07:07:39.947909 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbsh\" (UniqueName: \"kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh\") pod \"cinder-api-0\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " pod="openstack/cinder-api-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.002775 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.002882 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003106 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003168 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003200 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxrn\" (UniqueName: \"kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003256 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003272 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.003741 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd\") pod 
\"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.007021 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.007267 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.008519 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.008621 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.014501 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.019778 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxrn\" (UniqueName: \"kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn\") pod \"ceilometer-0\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.050626 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.252361 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:07:40 crc kubenswrapper[5058]: W1014 07:07:40.475726 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc792cd6_15f5_4ef1_a383_4cecacce0df3.slice/crio-75870cd7693993bb01c43b953b219f4d91888762e28052bb43caf791ab31533f WatchSource:0}: Error finding container 75870cd7693993bb01c43b953b219f4d91888762e28052bb43caf791ab31533f: Status 404 returned error can't find the container with id 75870cd7693993bb01c43b953b219f4d91888762e28052bb43caf791ab31533f Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.476464 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.731718 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:07:40 crc kubenswrapper[5058]: W1014 07:07:40.738463 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95e24fee_4ee3_4e3e_b28e_ee0a88884b25.slice/crio-15c23417192b32c9fe684fd533d45e9abcb95f6c40f6dd40347fc26bc1411551 WatchSource:0}: Error finding container 15c23417192b32c9fe684fd533d45e9abcb95f6c40f6dd40347fc26bc1411551: Status 404 returned error can't find the container with id 15c23417192b32c9fe684fd533d45e9abcb95f6c40f6dd40347fc26bc1411551 Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.803147 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35000b48-e362-4178-95ad-648a5495d7a6" path="/var/lib/kubelet/pods/35000b48-e362-4178-95ad-648a5495d7a6/volumes" Oct 14 07:07:40 crc kubenswrapper[5058]: I1014 07:07:40.803759 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5a513c-30a3-4221-a451-ec25eeb46c4e" path="/var/lib/kubelet/pods/4f5a513c-30a3-4221-a451-ec25eeb46c4e/volumes" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.129411 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"] Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.130982 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.134356 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.134647 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.143750 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"] Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.229465 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfjxv\" (UniqueName: \"kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.229736 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.229769 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.229836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.229990 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.230014 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.230048 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334398 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334449 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334620 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfjxv\" (UniqueName: \"kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334654 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334706 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.334745 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.335153 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.346349 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.347040 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.347050 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.347145 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.348041 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.353357 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfjxv\" (UniqueName: \"kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv\") pod \"barbican-api-96d5856f6-5t2b2\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.478387 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.511645 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerStarted","Data":"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9"} Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.511941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerStarted","Data":"15c23417192b32c9fe684fd533d45e9abcb95f6c40f6dd40347fc26bc1411551"} Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.516167 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerStarted","Data":"75cd20c0073237cb2ae10b086a89b60f8da343d852dbb0faa5aaa253f9871fe4"} Oct 14 07:07:41 crc kubenswrapper[5058]: I1014 07:07:41.516280 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerStarted","Data":"75870cd7693993bb01c43b953b219f4d91888762e28052bb43caf791ab31533f"} Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.011152 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"] Oct 14 07:07:42 crc kubenswrapper[5058]: W1014 07:07:42.024059 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3c7b950_e1ff_4b4d_8531_68d7c9ba6e7c.slice/crio-a62871ab34712c99303e554ea4f2853a720abad1b86e43c4121ac42ee1d75e33 WatchSource:0}: Error finding container a62871ab34712c99303e554ea4f2853a720abad1b86e43c4121ac42ee1d75e33: Status 404 returned error can't find the container with id a62871ab34712c99303e554ea4f2853a720abad1b86e43c4121ac42ee1d75e33 Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.530853 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerStarted","Data":"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"} Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.531215 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerStarted","Data":"a62871ab34712c99303e554ea4f2853a720abad1b86e43c4121ac42ee1d75e33"} Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.532761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerStarted","Data":"f5d187c11c0f467f10e4343a5baa9bc8683d7c0ce7601c7a7a218b64e989e1ed"} Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.533939 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.536140 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerStarted","Data":"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1"} Oct 14 07:07:42 crc kubenswrapper[5058]: I1014 07:07:42.825780 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.825747565 podStartE2EDuration="3.825747565s" podCreationTimestamp="2025-10-14 07:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:42.568345115 +0000 UTC m=+1210.479428931" watchObservedRunningTime="2025-10-14 07:07:42.825747565 +0000 UTC m=+1210.736831381" Oct 14 07:07:43 crc kubenswrapper[5058]: I1014 07:07:43.552131 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerStarted","Data":"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a"} Oct 14 07:07:43 crc kubenswrapper[5058]: I1014 07:07:43.552650 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:43 crc kubenswrapper[5058]: I1014 07:07:43.552669 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:43 crc kubenswrapper[5058]: I1014 07:07:43.558301 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerStarted","Data":"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a"} Oct 14 07:07:43 crc kubenswrapper[5058]: I1014 07:07:43.574279 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-96d5856f6-5t2b2" podStartSLOduration=2.574244466 podStartE2EDuration="2.574244466s" podCreationTimestamp="2025-10-14 07:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:43.572409644 +0000 UTC m=+1211.483493490" watchObservedRunningTime="2025-10-14 07:07:43.574244466 +0000 UTC m=+1211.485328302" Oct 14 07:07:44 crc kubenswrapper[5058]: I1014 07:07:44.571436 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerStarted","Data":"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2"} Oct 14 07:07:44 crc kubenswrapper[5058]: I1014 07:07:44.598454 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.098427209 podStartE2EDuration="5.59843268s" podCreationTimestamp="2025-10-14 07:07:39 +0000 UTC" firstStartedPulling="2025-10-14 07:07:40.743025974 +0000 UTC m=+1208.654109780" lastFinishedPulling="2025-10-14 07:07:44.243031405 +0000 UTC m=+1212.154115251" observedRunningTime="2025-10-14 07:07:44.59597004 +0000 UTC m=+1212.507053866" watchObservedRunningTime="2025-10-14 07:07:44.59843268 +0000 UTC m=+1212.509516486" Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.096712 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.140496 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.295028 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.349143 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 
07:07:45.349388 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-649d884857-tv6w2" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="dnsmasq-dns" containerID="cri-o://81847f1c4c49de2d93ce7df35b6fbf3e9f6055186bcf98bd822133f2ff45d3d1" gracePeriod=10 Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.590195 5058 generic.go:334] "Generic (PLEG): container finished" podID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerID="81847f1c4c49de2d93ce7df35b6fbf3e9f6055186bcf98bd822133f2ff45d3d1" exitCode=0 Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.590813 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="cinder-scheduler" containerID="cri-o://865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007" gracePeriod=30 Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.590911 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649d884857-tv6w2" event={"ID":"0998b4f5-60ea-4742-b333-e4de1dad4e5f","Type":"ContainerDied","Data":"81847f1c4c49de2d93ce7df35b6fbf3e9f6055186bcf98bd822133f2ff45d3d1"} Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.591223 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="probe" containerID="cri-o://add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86" gracePeriod=30 Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.591465 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.861168 5058 util.go:48] "No ready sandbox for pod can be found. 
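
The "Observed pod startup duration" entries above separate two figures: podStartE2EDuration is wall-clock from pod creation to the pod being observed running, while podStartSLOduration additionally subtracts time spent pulling images. For cinder-api-0 and barbican-api-96d5856f6-5t2b2 the pull timestamps are Go's zero time (0001-01-01 00:00:00), i.e. no pull was needed, so the two durations coincide; ceilometer-0 pulled images for about 3.5s (07:07:40.743 to 07:07:44.243), giving SLO ≈ 2.098s against E2E ≈ 5.598s. The m=+1210.47-style suffixes are Go monotonic-clock readings. A sketch of that arithmetic using the ceilometer-0 values from the entry above (my reading of the tracker's fields, not kubelet code):

```go
// slomath.go — sketch of the arithmetic behind podStartSLOduration:
// E2E start-up time minus the image-pull window. Zero-value pull
// timestamps mean the image was already present, so SLO == E2E.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-10-14 07:07:39 +0000 UTC")                // podCreationTimestamp
	running := parse("2025-10-14 07:07:44.59843268 +0000 UTC")       // watchObservedRunningTime
	pullBeg := parse("2025-10-14 07:07:40.743025974 +0000 UTC")      // firstStartedPulling
	pullEnd := parse("2025-10-14 07:07:44.243031405 +0000 UTC")      // lastFinishedPulling

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullBeg)
	fmt.Println("E2E:", e2e, "SLO:", slo) // ≈ 5.598s and ≈ 2.098s, matching the entry above
}
```
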
Need to start a new one" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989517 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989588 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989662 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhxq\" (UniqueName: \"kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989875 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:45 crc kubenswrapper[5058]: I1014 07:07:45.989895 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb\") pod \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\" (UID: \"0998b4f5-60ea-4742-b333-e4de1dad4e5f\") " Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.004158 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq" (OuterVolumeSpecName: "kube-api-access-5nhxq") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "kube-api-access-5nhxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.048494 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.051384 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.058335 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.084291 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config" (OuterVolumeSpecName: "config") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.092418 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.092463 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.092480 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.092491 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.092503 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhxq\" (UniqueName: \"kubernetes.io/projected/0998b4f5-60ea-4742-b333-e4de1dad4e5f-kube-api-access-5nhxq\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.100263 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0998b4f5-60ea-4742-b333-e4de1dad4e5f" (UID: "0998b4f5-60ea-4742-b333-e4de1dad4e5f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.193864 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0998b4f5-60ea-4742-b333-e4de1dad4e5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.323683 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.600507 5058 generic.go:334] "Generic (PLEG): container finished" podID="c91e794b-6511-467d-a93a-925b4a1d2756" containerID="add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86" exitCode=0 Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.600600 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerDied","Data":"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86"} Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.602609 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-649d884857-tv6w2" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.602651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-649d884857-tv6w2" event={"ID":"0998b4f5-60ea-4742-b333-e4de1dad4e5f","Type":"ContainerDied","Data":"da6126f16517bf14fe27dd135d6749083715efd656c56ef5400f56a016a97177"} Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.602683 5058 scope.go:117] "RemoveContainer" containerID="81847f1c4c49de2d93ce7df35b6fbf3e9f6055186bcf98bd822133f2ff45d3d1" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.634959 5058 scope.go:117] "RemoveContainer" containerID="699c32a0319df221a5207c8b09c9daa103c1e68203b2494c7d5815d06d84a20c" Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.642253 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.654751 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-649d884857-tv6w2"] Oct 14 07:07:46 crc kubenswrapper[5058]: I1014 07:07:46.805343 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" path="/var/lib/kubelet/pods/0998b4f5-60ea-4742-b333-e4de1dad4e5f/volumes" Oct 14 07:07:47 crc kubenswrapper[5058]: I1014 07:07:47.166663 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:47 crc kubenswrapper[5058]: I1014 07:07:47.174511 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.079972 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.137337 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.157403 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.157561 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4l9\" (UniqueName: \"kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.157636 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.157785 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.158019 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.158118 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom\") pod \"c91e794b-6511-467d-a93a-925b4a1d2756\" (UID: \"c91e794b-6511-467d-a93a-925b4a1d2756\") " Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.162951 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.176392 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9" (OuterVolumeSpecName: "kube-api-access-qb4l9") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "kube-api-access-qb4l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.176894 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts" (OuterVolumeSpecName: "scripts") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.182952 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.226758 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.227024 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-679d46664d-5nhrm" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-api" containerID="cri-o://1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12" gracePeriod=30 Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.227521 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-679d46664d-5nhrm" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-httpd" containerID="cri-o://1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7" gracePeriod=30 Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.260592 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4l9\" (UniqueName: \"kubernetes.io/projected/c91e794b-6511-467d-a93a-925b4a1d2756-kube-api-access-qb4l9\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.260628 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c91e794b-6511-467d-a93a-925b4a1d2756-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.260641 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.260651 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.289179 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.331058 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data" (OuterVolumeSpecName: "config-data") pod "c91e794b-6511-467d-a93a-925b4a1d2756" (UID: "c91e794b-6511-467d-a93a-925b4a1d2756"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.361859 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.361889 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91e794b-6511-467d-a93a-925b4a1d2756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.410618 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.636936 5058 generic.go:334] "Generic (PLEG): container finished" podID="c91e794b-6511-467d-a93a-925b4a1d2756" containerID="865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007" exitCode=0 Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.636972 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerDied","Data":"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007"} Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.637289 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c91e794b-6511-467d-a93a-925b4a1d2756","Type":"ContainerDied","Data":"8cf8f3a694f1eb364a65ef89975bf25b3387cd638f20872e144950c110436e2c"} Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.637311 5058 scope.go:117] "RemoveContainer" containerID="add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.637062 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.655374 5058 generic.go:334] "Generic (PLEG): container finished" podID="a986188b-990b-4314-b1cd-a6586b8bea94" containerID="1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7" exitCode=0 Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.655432 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerDied","Data":"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7"} Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.685939 5058 scope.go:117] "RemoveContainer" containerID="865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.689836 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.697023 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.716861 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.717268 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="cinder-scheduler" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717286 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="cinder-scheduler" Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.717306 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="probe" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717313 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="probe" Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.717330 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="dnsmasq-dns" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717336 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="dnsmasq-dns" Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.717349 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="init" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717354 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="init" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717507 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0998b4f5-60ea-4742-b333-e4de1dad4e5f" containerName="dnsmasq-dns" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717517 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="cinder-scheduler" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.717525 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" containerName="probe" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.718488 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.720314 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.723576 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776243 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkm5\" (UniqueName: \"kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776303 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776360 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776379 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776445 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.776497 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.777551 5058 scope.go:117] "RemoveContainer" containerID="add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86" Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.778137 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86\": container with ID starting with add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86 not found: ID does not exist" containerID="add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.778166 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86"} err="failed to get container status \"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86\": rpc error: code = NotFound desc = could not find container \"add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86\": container with ID starting with add6b683a70ed82ea5dea1f6a55e5ff31d1d516e8d07b60930865a5ddc4d1b86 not found: ID does not exist" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.778191 5058 scope.go:117] "RemoveContainer" containerID="865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007" Oct 14 07:07:49 crc kubenswrapper[5058]: E1014 07:07:49.778647 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007\": container with ID starting with 865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007 not found: ID does not exist" containerID="865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.778695 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007"} err="failed to get container status \"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007\": rpc error: code = NotFound desc = could not find container \"865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007\": container with ID starting with 865f54c2d9666798c54f3fcd0933f0531473a99316aeb169263d93d1332cb007 not found: ID does not exist" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.877711 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgkm5\" (UniqueName: \"kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.877773 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.877829 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.877848 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.877897 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 
crc kubenswrapper[5058]: I1014 07:07:49.877936 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.878421 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.881740 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.882170 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.882516 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.882678 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:49 crc kubenswrapper[5058]: I1014 07:07:49.903001 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgkm5\" (UniqueName: \"kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5\") pod \"cinder-scheduler-0\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") " pod="openstack/cinder-scheduler-0" Oct 14 07:07:50 crc kubenswrapper[5058]: I1014 07:07:50.072061 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:07:50 crc kubenswrapper[5058]: I1014 07:07:50.527718 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:07:50 crc kubenswrapper[5058]: I1014 07:07:50.680739 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerStarted","Data":"9398d9b7ed17d8f16f3f4facee3c7ca860ebac6ecedac96fea5e4f9c7afe65c3"} Oct 14 07:07:50 crc kubenswrapper[5058]: I1014 07:07:50.812582 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91e794b-6511-467d-a93a-925b4a1d2756" path="/var/lib/kubelet/pods/c91e794b-6511-467d-a93a-925b4a1d2756/volumes" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.242545 5058 util.go:48] "No ready sandbox for pod can be found. 
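
The "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs a little earlier are a benign race: kubelet retries RemoveContainer for IDs CRI-O has already deleted, and NotFound simply confirms the container is gone. The usual pattern is to treat not-found as success when deleting — idempotent cleanup — sketched here with stdlib errors (all names illustrative, not kubelet's):

```go
// cleanup.go — sketch of the idempotent-delete pattern behind those benign
// "ID does not exist" errors: when removing something that is already gone,
// "not found" means the desired state already holds.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// removeContainer stands in for a runtime call; it fails with errNotFound
// once the container has already been deleted.
func removeContainer(id string, store map[string]bool) error {
	if !store[id] {
		return fmt.Errorf("container %s: %w", id, errNotFound)
	}
	delete(store, id)
	return nil
}

// cleanup is idempotent: NotFound is swallowed, any other error surfaces.
func cleanup(id string, store map[string]bool) error {
	if err := removeContainer(id, store); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	store := map[string]bool{"add6b683a70e": true}
	fmt.Println(cleanup("add6b683a70e", store)) // <nil>: really deleted
	fmt.Println(cleanup("add6b683a70e", store)) // <nil>: already gone
}
```
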
Need to start a new one" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.305713 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config\") pod \"a986188b-990b-4314-b1cd-a6586b8bea94\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.306231 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle\") pod \"a986188b-990b-4314-b1cd-a6586b8bea94\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.306325 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs\") pod \"a986188b-990b-4314-b1cd-a6586b8bea94\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.306517 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g\") pod \"a986188b-990b-4314-b1cd-a6586b8bea94\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.306562 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config\") pod \"a986188b-990b-4314-b1cd-a6586b8bea94\" (UID: \"a986188b-990b-4314-b1cd-a6586b8bea94\") " Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.313841 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g" (OuterVolumeSpecName: "kube-api-access-69q6g") pod "a986188b-990b-4314-b1cd-a6586b8bea94" (UID: "a986188b-990b-4314-b1cd-a6586b8bea94"). InnerVolumeSpecName "kube-api-access-69q6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.314265 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a986188b-990b-4314-b1cd-a6586b8bea94" (UID: "a986188b-990b-4314-b1cd-a6586b8bea94"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.385098 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a986188b-990b-4314-b1cd-a6586b8bea94" (UID: "a986188b-990b-4314-b1cd-a6586b8bea94"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.388131 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a986188b-990b-4314-b1cd-a6586b8bea94" (UID: "a986188b-990b-4314-b1cd-a6586b8bea94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.408832 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69q6g\" (UniqueName: \"kubernetes.io/projected/a986188b-990b-4314-b1cd-a6586b8bea94-kube-api-access-69q6g\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.408858 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.408869 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.408877 5058 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.410955 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config" (OuterVolumeSpecName: "config") pod "a986188b-990b-4314-b1cd-a6586b8bea94" (UID: "a986188b-990b-4314-b1cd-a6586b8bea94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.509835 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a986188b-990b-4314-b1cd-a6586b8bea94-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.719319 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerStarted","Data":"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf"} Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.724400 5058 generic.go:334] "Generic (PLEG): container finished" podID="a986188b-990b-4314-b1cd-a6586b8bea94" containerID="1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12" exitCode=0 Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.724440 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerDied","Data":"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12"} Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.724472 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679d46664d-5nhrm" event={"ID":"a986188b-990b-4314-b1cd-a6586b8bea94","Type":"ContainerDied","Data":"4870b627584e0f2634001cff067f5c78383a6b7823a4a903201f53e3c9c74e07"} Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.724493 5058 scope.go:117] "RemoveContainer" containerID="1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.724503 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-679d46664d-5nhrm" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.746415 5058 scope.go:117] "RemoveContainer" containerID="1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.820256 5058 scope.go:117] "RemoveContainer" containerID="1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7" Oct 14 07:07:51 crc kubenswrapper[5058]: E1014 07:07:51.821186 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7\": container with ID starting with 1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7 not found: ID does not exist" containerID="1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.821526 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7"} err="failed to get container status \"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7\": rpc error: code = NotFound desc = could not find container \"1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7\": container with ID starting with 1b30c1ef189cab825c5769e158a13ba5e453894cf74747b3854682c67c0db9a7 not found: ID does not exist" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.821652 5058 scope.go:117] "RemoveContainer" containerID="1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12" Oct 14 07:07:51 crc kubenswrapper[5058]: E1014 07:07:51.822618 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12\": container with ID starting with 1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12 not found: ID does not exist" containerID="1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.822720 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12"} err="failed to get container status \"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12\": rpc error: code = NotFound desc = could not find container \"1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12\": container with ID starting with 1d6ab572de51f74d7f61979a6bd302ecbc159f01d34ef13dc4f6629f84015e12 not found: ID does not exist" Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.826193 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:51 crc kubenswrapper[5058]: I1014 07:07:51.849184 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-679d46664d-5nhrm"] Oct 14 07:07:52 crc kubenswrapper[5058]: I1014 07:07:52.181662 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 14 07:07:52 crc kubenswrapper[5058]: I1014 07:07:52.735155 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerStarted","Data":"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a"} Oct 14 07:07:52 crc kubenswrapper[5058]: 
I1014 07:07:52.757890 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.757869376 podStartE2EDuration="3.757869376s" podCreationTimestamp="2025-10-14 07:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:07:52.750876076 +0000 UTC m=+1220.661959882" watchObservedRunningTime="2025-10-14 07:07:52.757869376 +0000 UTC m=+1220.668953182" Oct 14 07:07:52 crc kubenswrapper[5058]: I1014 07:07:52.801559 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" path="/var/lib/kubelet/pods/a986188b-990b-4314-b1cd-a6586b8bea94/volumes" Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.041246 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.223881 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.295983 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"] Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.296248 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74dcb85b56-gfnsw" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api-log" containerID="cri-o://8e1f702490aeef4623164cb3994e0fb31c9d8f6e1c4729188196b4aa0d6853db" gracePeriod=30 Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.296785 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74dcb85b56-gfnsw" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api" containerID="cri-o://141b30bf55fa76ef2c981955c4336c7e83f02fae4881de9365c56fb60e038785" gracePeriod=30 Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.751413 5058 generic.go:334] "Generic (PLEG): container finished" podID="918cfbce-acfd-4728-927c-548ee1960b8d" containerID="8e1f702490aeef4623164cb3994e0fb31c9d8f6e1c4729188196b4aa0d6853db" exitCode=143 Oct 14 07:07:53 crc kubenswrapper[5058]: I1014 07:07:53.751997 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerDied","Data":"8e1f702490aeef4623164cb3994e0fb31c9d8f6e1c4729188196b4aa0d6853db"} Oct 14 07:07:55 crc kubenswrapper[5058]: I1014 07:07:55.073770 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.494556 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74dcb85b56-gfnsw" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:52038->10.217.0.163:9311: read: connection reset by peer" Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.494638 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74dcb85b56-gfnsw" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:52046->10.217.0.163:9311: read: connection reset 
by peer" Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.790190 5058 generic.go:334] "Generic (PLEG): container finished" podID="918cfbce-acfd-4728-927c-548ee1960b8d" containerID="141b30bf55fa76ef2c981955c4336c7e83f02fae4881de9365c56fb60e038785" exitCode=0 Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.805459 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerDied","Data":"141b30bf55fa76ef2c981955c4336c7e83f02fae4881de9365c56fb60e038785"} Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.885488 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:07:56 crc kubenswrapper[5058]: I1014 07:07:56.911762 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74dcb85b56-gfnsw" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.021184 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle\") pod \"918cfbce-acfd-4728-927c-548ee1960b8d\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.021238 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data\") pod \"918cfbce-acfd-4728-927c-548ee1960b8d\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.021345 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom\") pod \"918cfbce-acfd-4728-927c-548ee1960b8d\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.021394 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs\") pod \"918cfbce-acfd-4728-927c-548ee1960b8d\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.021458 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gx7c\" (UniqueName: \"kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c\") pod \"918cfbce-acfd-4728-927c-548ee1960b8d\" (UID: \"918cfbce-acfd-4728-927c-548ee1960b8d\") " Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.022277 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs" (OuterVolumeSpecName: "logs") pod "918cfbce-acfd-4728-927c-548ee1960b8d" (UID: "918cfbce-acfd-4728-927c-548ee1960b8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.026633 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "918cfbce-acfd-4728-927c-548ee1960b8d" (UID: "918cfbce-acfd-4728-927c-548ee1960b8d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.057024 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918cfbce-acfd-4728-927c-548ee1960b8d" (UID: "918cfbce-acfd-4728-927c-548ee1960b8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.060087 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c" (OuterVolumeSpecName: "kube-api-access-4gx7c") pod "918cfbce-acfd-4728-927c-548ee1960b8d" (UID: "918cfbce-acfd-4728-927c-548ee1960b8d"). InnerVolumeSpecName "kube-api-access-4gx7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.093403 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data" (OuterVolumeSpecName: "config-data") pod "918cfbce-acfd-4728-927c-548ee1960b8d" (UID: "918cfbce-acfd-4728-927c-548ee1960b8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.123041 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.123061 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.123070 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918cfbce-acfd-4728-927c-548ee1960b8d-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.123080 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918cfbce-acfd-4728-927c-548ee1960b8d-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.123089 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gx7c\" (UniqueName: \"kubernetes.io/projected/918cfbce-acfd-4728-927c-548ee1960b8d-kube-api-access-4gx7c\") on node \"crc\" DevicePath \"\"" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.804186 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74dcb85b56-gfnsw" event={"ID":"918cfbce-acfd-4728-927c-548ee1960b8d","Type":"ContainerDied","Data":"875df13a1207c797d663fb0a08daeee6031678452ac6e0f196288680140d066f"} Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.804268 5058 scope.go:117] "RemoveContainer" containerID="141b30bf55fa76ef2c981955c4336c7e83f02fae4881de9365c56fb60e038785" Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.804258 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.850903 5058 scope.go:117] "RemoveContainer" containerID="8e1f702490aeef4623164cb3994e0fb31c9d8f6e1c4729188196b4aa0d6853db"
Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.861657 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"]
Oct 14 07:07:57 crc kubenswrapper[5058]: I1014 07:07:57.871540 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74dcb85b56-gfnsw"]
Oct 14 07:07:58 crc kubenswrapper[5058]: I1014 07:07:58.831110 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" path="/var/lib/kubelet/pods/918cfbce-acfd-4728-927c-548ee1960b8d/volumes"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.604105 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 14 07:07:59 crc kubenswrapper[5058]: E1014 07:07:59.604841 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-httpd"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.604859 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-httpd"
Oct 14 07:07:59 crc kubenswrapper[5058]: E1014 07:07:59.604886 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api-log"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.604894 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api-log"
Oct 14 07:07:59 crc kubenswrapper[5058]: E1014 07:07:59.604919 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.604927 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: E1014 07:07:59.604944 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.604951 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.605160 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-httpd"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.605183 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a986188b-990b-4314-b1cd-a6586b8bea94" containerName="neutron-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.605205 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.605220 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="918cfbce-acfd-4728-927c-548ee1960b8d" containerName="barbican-api-log"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.605995 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
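
[Annotation] The RemoveStaleState block above fires on the next pod admission ("SyncLoop ADD"): resource-manager state keyed by podUID/containerName that belongs to pods no longer known to the kubelet is purged. A sketch of that idea only; the types and the admission trigger here are illustrative assumptions, not the kubelet's cpu_manager implementation:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    func main() {
        // leftover per-container assignments from deleted pods
        assignments := map[key]string{
            {"918cfbce-acfd-4728-927c-548ee1960b8d", "barbican-api"}:     "cpuset 0-3",
            {"918cfbce-acfd-4728-927c-548ee1960b8d", "barbican-api-log"}: "cpuset 0-3",
        }
        active := map[string]bool{} // UIDs of pods still known to the pod manager
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(assignments, k) // mirrors "Deleted CPUSet assignment"
            }
        }
    }
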
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.608400 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zwvdb"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.609062 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.609229 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.618889 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.671830 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.671875 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.671967 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtmgt\" (UniqueName: \"kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.672019 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.774026 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.774110 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.774349 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtmgt\" (UniqueName: \"kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.774513 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.776174 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.781231 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.782295 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.793742 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtmgt\" (UniqueName: \"kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt\") pod \"openstackclient\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " pod="openstack/openstackclient"
Oct 14 07:07:59 crc kubenswrapper[5058]: I1014 07:07:59.965728 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 07:08:00 crc kubenswrapper[5058]: I1014 07:08:00.317656 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 14 07:08:00 crc kubenswrapper[5058]: I1014 07:08:00.424092 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 07:08:00 crc kubenswrapper[5058]: I1014 07:08:00.862821 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"95fc1e78-622a-4acc-865a-0f9e95bac6b3","Type":"ContainerStarted","Data":"9f857ad1b0d365e9ccd06321990a5d512ac6e1af579aac229676780afe20e5f8"}
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.702977 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"]
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.705325 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.707300 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.709038 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.709656 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.719909 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"]
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.853351 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.853633 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.853752 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.853905 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.854039 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.854151 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.854240 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.854362 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wb5c\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.955590 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wb5c\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.955720 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.955811 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.955902 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.956054 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.956112 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.956127 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.956143 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.961439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.963218 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.966389 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.969566 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.974110 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wb5c\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.976694 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.979342 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:03 crc kubenswrapper[5058]: I1014 07:08:03.979978 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle\") pod \"swift-proxy-54cccf7b97-zh6kt\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.030697 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.252182 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.252918 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-central-agent" containerID="cri-o://fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" gracePeriod=30
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.253464 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="proxy-httpd" containerID="cri-o://0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2" gracePeriod=30
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.253566 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="sg-core" containerID="cri-o://ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a" gracePeriod=30
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.253630 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-notification-agent" containerID="cri-o://80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1" gracePeriod=30
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.277494 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.601080 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"]
Oct 14 07:08:04 crc kubenswrapper[5058]: W1014 07:08:04.607987 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0820570e_22cb_4f0b_9ee4_f5237ccdfdff.slice/crio-d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f WatchSource:0}: Error finding container d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f: Status 404 returned error can't find the container with id d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.901393 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerStarted","Data":"d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f"}
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905643 5058 generic.go:334] "Generic (PLEG): container finished" podID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerID="0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2" exitCode=0
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905675 5058 generic.go:334] "Generic (PLEG): container finished" podID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerID="ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a" exitCode=2
Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905684 5058 generic.go:334] "Generic (PLEG): container finished" podID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerID="fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" exitCode=0
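
[Annotation] The W1014 manager.go:1169 warning above is a benign startup race: the cgroup watcher sees the new crio-<id> cgroup before the container is queryable, the lookup returns 404, and the event is dropped; the very next PLEG entry shows the same ID as ContainerStarted. A sketch of that tolerate-and-skip pattern under that assumption; lookupContainer is an invented stand-in, not the cadvisor API:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("can't find the container with id")

    // stand-in for the container status lookup; not yet registered here
    func lookupContainer(id string) error { return errNotFound }

    func main() {
        id := "d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f"
        if err := lookupContainer(id); errors.Is(err, errNotFound) {
            // Logged as a warning and skipped; a later relist converges on the truth.
            fmt.Printf("W: Failed to process watch event: %v %s\n", err, id)
        }
    }
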
finished" podID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerID="fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" exitCode=0 Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905687 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerDied","Data":"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2"} Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905713 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerDied","Data":"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a"} Oct 14 07:08:04 crc kubenswrapper[5058]: I1014 07:08:04.905725 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerDied","Data":"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.418753 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562439 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkxrn\" (UniqueName: \"kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562500 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562539 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562583 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562621 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562658 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.562710 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts\") pod \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\" (UID: \"95e24fee-4ee3-4e3e-b28e-ee0a88884b25\") " Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.563650 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.564147 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.567448 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn" (OuterVolumeSpecName: "kube-api-access-vkxrn") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "kube-api-access-vkxrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.568772 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts" (OuterVolumeSpecName: "scripts") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.610942 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.653038 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.657983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data" (OuterVolumeSpecName: "config-data") pod "95e24fee-4ee3-4e3e-b28e-ee0a88884b25" (UID: "95e24fee-4ee3-4e3e-b28e-ee0a88884b25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665287 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkxrn\" (UniqueName: \"kubernetes.io/projected/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-kube-api-access-vkxrn\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665324 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665338 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665349 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665364 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665374 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.665385 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95e24fee-4ee3-4e3e-b28e-ee0a88884b25-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.957462 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerStarted","Data":"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.957510 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerStarted","Data":"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.957550 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54cccf7b97-zh6kt" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.960302 5058 generic.go:334] "Generic (PLEG): container finished" podID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerID="80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1" exitCode=0 Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.960357 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerDied","Data":"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.960363 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.960393 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95e24fee-4ee3-4e3e-b28e-ee0a88884b25","Type":"ContainerDied","Data":"15c23417192b32c9fe684fd533d45e9abcb95f6c40f6dd40347fc26bc1411551"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.960416 5058 scope.go:117] "RemoveContainer" containerID="0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2" Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.962724 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"95fc1e78-622a-4acc-865a-0f9e95bac6b3","Type":"ContainerStarted","Data":"ccd63adf9ff2533f48ef01689c4ae84b7ad678fd3261c983558f6e60e740f9b0"} Oct 14 07:08:09 crc kubenswrapper[5058]: I1014 07:08:09.975530 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-54cccf7b97-zh6kt" podStartSLOduration=6.975513018 podStartE2EDuration="6.975513018s" podCreationTimestamp="2025-10-14 07:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:09.972945735 +0000 UTC m=+1237.884029581" watchObservedRunningTime="2025-10-14 07:08:09.975513018 +0000 UTC m=+1237.886596834" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.008158 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.2908423620000002 podStartE2EDuration="11.008137315s" podCreationTimestamp="2025-10-14 07:07:59 +0000 UTC" firstStartedPulling="2025-10-14 07:08:00.424471223 +0000 UTC m=+1228.335555029" lastFinishedPulling="2025-10-14 07:08:09.141766176 +0000 UTC m=+1237.052849982" observedRunningTime="2025-10-14 07:08:10.006833448 +0000 UTC m=+1237.917917264" watchObservedRunningTime="2025-10-14 07:08:10.008137315 +0000 UTC m=+1237.919221131" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.016638 5058 scope.go:117] "RemoveContainer" containerID="ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.045876 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.046024 5058 scope.go:117] "RemoveContainer" containerID="80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.065515 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.070108 5058 scope.go:117] "RemoveContainer" containerID="fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.079877 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.080200 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-central-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080215 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-central-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.080241 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
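
[Annotation] The two "Observed pod startup duration" entries above make the SLO bookkeeping visible: podStartSLOduration is the end-to-end figure minus the image-pull window. Recomputing the openstackclient numbers from the timestamps logged above (values copied from the entry; the layout string is just Go's time.String format, and this is a verification sketch, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-10-14 07:07:59 +0000 UTC")
        running, _ := time.Parse(layout, "2025-10-14 07:08:10.008137315 +0000 UTC")
        pullStart, _ := time.Parse(layout, "2025-10-14 07:08:00.424471223 +0000 UTC")
        pullEnd, _ := time.Parse(layout, "2025-10-14 07:08:09.141766176 +0000 UTC")

        e2e := running.Sub(created)         // 11.008137315s, matches podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 2.290842362s, matches podStartSLOduration
        fmt.Println(e2e, slo)
    }

For swift-proxy the pull timestamps are the zero value ("0001-01-01 ..."), meaning no image was pulled, so SLO and E2E durations coincide (6.975513018s).
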
podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="proxy-httpd" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080247 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="proxy-httpd" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.080258 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="sg-core" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080265 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="sg-core" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.080275 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-notification-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080281 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-notification-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080462 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-notification-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080479 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="ceilometer-central-agent" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080488 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="proxy-httpd" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.080506 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" containerName="sg-core" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.082074 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.084296 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.084524 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.092616 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.150004 5058 scope.go:117] "RemoveContainer" containerID="0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.150532 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2\": container with ID starting with 0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2 not found: ID does not exist" containerID="0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.150574 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2"} err="failed to get container status \"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2\": rpc error: code = NotFound desc = could not find container \"0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2\": container with ID starting with 0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2 not found: ID does not exist" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.150601 5058 scope.go:117] "RemoveContainer" containerID="ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.150886 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a\": container with ID starting with ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a not found: ID does not exist" containerID="ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.150907 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a"} err="failed to get container status \"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a\": rpc error: code = NotFound desc = could not find container \"ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a\": container with ID starting with ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a not found: ID does not exist" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.150920 5058 scope.go:117] "RemoveContainer" containerID="80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.151321 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1\": container with ID starting with 80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1 not found: ID 
does not exist" containerID="80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.151351 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1"} err="failed to get container status \"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1\": rpc error: code = NotFound desc = could not find container \"80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1\": container with ID starting with 80ca6cb0568a9433a0b3b2986c7e933798e5f9e7b07d9e7a026e8d617620bdb1 not found: ID does not exist" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.151366 5058 scope.go:117] "RemoveContainer" containerID="fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" Oct 14 07:08:10 crc kubenswrapper[5058]: E1014 07:08:10.151582 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9\": container with ID starting with fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9 not found: ID does not exist" containerID="fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.151601 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9"} err="failed to get container status \"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9\": rpc error: code = NotFound desc = could not find container \"fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9\": container with ID starting with fcf33bfd8dc0dbbbcb73e7ebecd48546f3c7ceff9e1bbc87e65869b82ace6ea9 not found: ID does not exist" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.172650 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.172688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.172718 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6vgh\" (UniqueName: \"kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.172746 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.173118 5058 reconciler_common.go:245] 
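
[Annotation] The NotFound/"DeleteContainer returned error" pairs above are the idempotent tail of cleanup: the containers were already removed with the sandbox, so re-running "RemoveContainer" gets NotFound from the runtime and treats it as already done. A sketch of that pattern; containerStatus here is an invented stand-in for the CRI ContainerStatus RPC, not a real client call:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("rpc error: code = NotFound desc = could not find container")

    func containerStatus(id string) error { return errNotFound } // already gone

    func main() {
        ids := []string{
            "0549c9080243fb559676cfa83581a906f811c791b1c58b2be72c0290597a70b2",
            "ad5b975fe0012b6c16b0fc6e11f20c035f2e793f7b8881ca2281864a7c51086a",
        }
        for _, id := range ids {
            if err := containerStatus(id); errors.Is(err, errNotFound) {
                // Logged ("DeleteContainer returned error") but treated as
                // success, so repeated cleanup passes stay idempotent.
                fmt.Printf("RemoveContainer %s: %v (ignored)\n", id[:12], err)
            }
        }
    }
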
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.173310 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.173363 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.274901 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275024 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275062 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275118 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275139 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275172 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6vgh\" (UniqueName: \"kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.275206 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0" Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.276548 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.276618 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.280023 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.280614 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.281450 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.286939 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.295642 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6vgh\" (UniqueName: \"kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh\") pod \"ceilometer-0\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") " pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.438184 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.438499 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-httpd" containerID="cri-o://f4c19c60ec6d3a4d8a03bc9568c6fb719e8e7f9d8261834a127d07601bddbda1" gracePeriod=30
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.438654 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-log" containerID="cri-o://22d0e119b6d953d94a386b6ad80d277a41637914a464324bd83ec47b8122ab60" gracePeriod=30
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.446397 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.805371 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e24fee-4ee3-4e3e-b28e-ee0a88884b25" path="/var/lib/kubelet/pods/95e24fee-4ee3-4e3e-b28e-ee0a88884b25/volumes"
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.982113 5058 generic.go:334] "Generic (PLEG): container finished" podID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerID="22d0e119b6d953d94a386b6ad80d277a41637914a464324bd83ec47b8122ab60" exitCode=143
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.983067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerDied","Data":"22d0e119b6d953d94a386b6ad80d277a41637914a464324bd83ec47b8122ab60"}
Oct 14 07:08:10 crc kubenswrapper[5058]: I1014 07:08:10.983521 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.005621 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.204979 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.205586 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-httpd" containerID="cri-o://c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd" gracePeriod=30
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.205442 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-log" containerID="cri-o://c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070" gracePeriod=30
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.420234 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.993027 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerStarted","Data":"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"}
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.994220 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerStarted","Data":"68d8430ef6dc03763678364e3e2f7d60b3f33a9118a304163fc6fb122b1e0bb4"}
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.996050 5058 generic.go:334] "Generic (PLEG): container finished" podID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerID="c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070" exitCode=143
Oct 14 07:08:11 crc kubenswrapper[5058]: I1014 07:08:11.996830 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerDied","Data":"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070"}
Oct 14 07:08:13 crc kubenswrapper[5058]: I1014 07:08:13.005252 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerStarted","Data":"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"}
Oct 14 07:08:13 crc kubenswrapper[5058]: I1014 07:08:13.610284 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:44348->10.217.0.149:9292: read: connection reset by peer"
Oct 14 07:08:13 crc kubenswrapper[5058]: I1014 07:08:13.610301 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:44354->10.217.0.149:9292: read: connection reset by peer"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.026828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerStarted","Data":"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"}
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.032958 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sh9kk"]
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.034338 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.046208 5058 generic.go:334] "Generic (PLEG): container finished" podID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerID="f4c19c60ec6d3a4d8a03bc9568c6fb719e8e7f9d8261834a127d07601bddbda1" exitCode=0
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.046262 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerDied","Data":"f4c19c60ec6d3a4d8a03bc9568c6fb719e8e7f9d8261834a127d07601bddbda1"}
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.050743 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sh9kk"]
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.061325 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.062763 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.150834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cc7\" (UniqueName: \"kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7\") pod \"nova-api-db-create-sh9kk\" (UID: \"29153d57-226a-44ac-ae33-86127aadf258\") " pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.190619 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252385 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252473 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252592 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252640 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252709 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bft7g\" (UniqueName: \"kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252732 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.252881 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs\") pod \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\" (UID: \"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2\") "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.253631 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cc7\" (UniqueName: \"kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7\") pod \"nova-api-db-create-sh9kk\" (UID: \"29153d57-226a-44ac-ae33-86127aadf258\") " pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.265183 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-r4bgw"]
Oct 14 07:08:14 crc kubenswrapper[5058]: E1014 07:08:14.266068 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-httpd"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.266104 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-httpd"
Oct 14 07:08:14 crc kubenswrapper[5058]: E1014 07:08:14.266141 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-log"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.266147 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-log"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.268396 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs" (OuterVolumeSpecName: "logs") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.269993 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.270700 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-httpd"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.270735 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" containerName="glance-log"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.271721 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.274011 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r4bgw"]
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.281932 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g" (OuterVolumeSpecName: "kube-api-access-bft7g") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "kube-api-access-bft7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.286129 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts" (OuterVolumeSpecName: "scripts") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.291725 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cc7\" (UniqueName: \"kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7\") pod \"nova-api-db-create-sh9kk\" (UID: \"29153d57-226a-44ac-ae33-86127aadf258\") " pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.297099 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.355050 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-logs\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.355071 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.355079 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-httpd-run\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.355100 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.355110 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bft7g\" (UniqueName: \"kubernetes.io/projected/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-kube-api-access-bft7g\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.368835 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.374821 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.376779 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.376852 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hcch5"]
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.378107 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.388733 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hcch5"]
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.389364 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:46350->10.217.0.150:9292: read: connection reset by peer"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.389363 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:46346->10.217.0.150:9292: read: connection reset by peer"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.418953 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data" (OuterVolumeSpecName: "config-data") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.430982 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" (UID: "6472a83d-6e1b-4326-a9f2-8c71c38aa6d2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.458415 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkjh\" (UniqueName: \"kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh\") pod \"nova-cell0-db-create-r4bgw\" (UID: \"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f\") " pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.458648 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.458662 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.458671 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.458683 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.559788 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkjh\" (UniqueName: \"kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh\") pod \"nova-cell0-db-create-r4bgw\" (UID: \"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f\") " pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.560109 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsxw\" (UniqueName: \"kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw\") pod \"nova-cell1-db-create-hcch5\" (UID: \"87390a42-eb96-410d-8bca-f77fb2e3ecf9\") " pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.582603 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkjh\" (UniqueName: \"kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh\") pod \"nova-cell0-db-create-r4bgw\" (UID: \"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f\") " pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.662063 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzsxw\" (UniqueName: \"kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw\") pod \"nova-cell1-db-create-hcch5\" (UID: \"87390a42-eb96-410d-8bca-f77fb2e3ecf9\") " pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.676215 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.694461 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsxw\" (UniqueName: \"kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw\") pod \"nova-cell1-db-create-hcch5\" (UID: \"87390a42-eb96-410d-8bca-f77fb2e3ecf9\") " pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.697155 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:14 crc kubenswrapper[5058]: I1014 07:08:14.862063 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sh9kk"]
Oct 14 07:08:14 crc kubenswrapper[5058]: W1014 07:08:14.945407 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29153d57_226a_44ac_ae33_86127aadf258.slice/crio-da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa WatchSource:0}: Error finding container da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa: Status 404 returned error can't find the container with id da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.005281 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.069996 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh9kk" event={"ID":"29153d57-226a-44ac-ae33-86127aadf258","Type":"ContainerStarted","Data":"da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa"}
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.079018 5058 generic.go:334] "Generic (PLEG): container finished" podID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerID="c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd" exitCode=0
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.079080 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerDied","Data":"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd"}
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.079107 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7775098-57ac-4e54-8b25-ce55dc5d2705","Type":"ContainerDied","Data":"c91733e14c017bb81f8d77921584b26ac7565ed5c883c08960aade75ba81632c"}
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.079125 5058 scope.go:117] "RemoveContainer" containerID="c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.079260 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.094288 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.094383 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6472a83d-6e1b-4326-a9f2-8c71c38aa6d2","Type":"ContainerDied","Data":"36864e0bee0965c6993bf5930ccc9c33cbbab77ac7fc4ff4d5a49fc0173b4657"}
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.127279 5058 scope.go:117] "RemoveContainer" containerID="c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.133566 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.154410 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.173936 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: E1014 07:08:15.174487 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-httpd"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.174511 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-httpd"
Oct 14 07:08:15 crc kubenswrapper[5058]: E1014 07:08:15.174530 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-log"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.174537 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-log"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.174776 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-log"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.174812 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" containerName="glance-httpd"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175301 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") "
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175343 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") "
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175378 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") "
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175404 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") "
\"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175432 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2h8v\" (UniqueName: \"kubernetes.io/projected/d7775098-57ac-4e54-8b25-ce55dc5d2705-kube-api-access-n2h8v\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175463 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175618 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.175660 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data\") pod \"d7775098-57ac-4e54-8b25-ce55dc5d2705\" (UID: \"d7775098-57ac-4e54-8b25-ce55dc5d2705\") " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.176534 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs" (OuterVolumeSpecName: "logs") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.177198 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.178845 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.183393 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.183868 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.187719 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.196720 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7775098-57ac-4e54-8b25-ce55dc5d2705-kube-api-access-n2h8v" (OuterVolumeSpecName: "kube-api-access-n2h8v") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "kube-api-access-n2h8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.209262 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.216771 5058 scope.go:117] "RemoveContainer" containerID="c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.217113 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts" (OuterVolumeSpecName: "scripts") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: E1014 07:08:15.218003 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd\": container with ID starting with c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd not found: ID does not exist" containerID="c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.218131 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd"} err="failed to get container status \"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd\": rpc error: code = NotFound desc = could not find container \"c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd\": container with ID starting with c5d466d3745b4bfa4928b5891ef33b8707e2b15fdb571b73b96ab37f63d0affd not found: ID does not exist" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.218214 5058 scope.go:117] "RemoveContainer" containerID="c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070" Oct 14 07:08:15 crc kubenswrapper[5058]: E1014 07:08:15.221059 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070\": container with ID starting with c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070 not found: ID does not exist" containerID="c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.221113 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070"} err="failed to get container status \"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070\": rpc error: code = NotFound desc = could not find container \"c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070\": container with ID starting with c82f8965a1ce34d0bed70d13a9018274b2f4d7867baddcee417e1a5741f01070 not found: ID does not exist" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.221136 5058 scope.go:117] "RemoveContainer" containerID="f4c19c60ec6d3a4d8a03bc9568c6fb719e8e7f9d8261834a127d07601bddbda1" Oct 14 07:08:15 
crc kubenswrapper[5058]: I1014 07:08:15.244620 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.267945 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data" (OuterVolumeSpecName: "config-data") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.286710 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.286865 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.286887 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d7775098-57ac-4e54-8b25-ce55dc5d2705" (UID: "d7775098-57ac-4e54-8b25-ce55dc5d2705"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.286984 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.287014 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.287190 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.287334 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lhj\" (UniqueName: \"kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.287982 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288018 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288193 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288206 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288217 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288226 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:15 crc 
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288243 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288251 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7775098-57ac-4e54-8b25-ce55dc5d2705-httpd-run\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.288262 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7775098-57ac-4e54-8b25-ce55dc5d2705-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.300749 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hcch5"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.306516 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.311125 5058 scope.go:117] "RemoveContainer" containerID="22d0e119b6d953d94a386b6ad80d277a41637914a464324bd83ec47b8122ab60"
Oct 14 07:08:15 crc kubenswrapper[5058]: W1014 07:08:15.315502 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87390a42_eb96_410d_8bca_f77fb2e3ecf9.slice/crio-56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921 WatchSource:0}: Error finding container 56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921: Status 404 returned error can't find the container with id 56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.389731 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.389786 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lhj\" (UniqueName: \"kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.389909 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.389929 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.389995 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.390019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.390033 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.390056 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.390114 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.391129 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.392570 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.393005 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.393374 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.396320 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.408834 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.412280 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.418572 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r4bgw"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.423667 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lhj\" (UniqueName: \"kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.428278 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.438670 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.447371 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.448819 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.451443 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.452654 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.456167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " pod="openstack/glance-default-internal-api-0"
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.482888 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.504033 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.592553 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.592909 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593014 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593037 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwdx\" (UniqueName: \"kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593061 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593090 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593111 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.593140 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695069 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695128 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695173 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695218 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695243 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695368 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695417 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpwdx\" (UniqueName: \"kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.695453 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.699406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.699551 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.699689 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.702068 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.709635 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.712818 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.719467 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpwdx\" (UniqueName: \"kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.723426 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:15 crc kubenswrapper[5058]: I1014 07:08:15.781422 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " pod="openstack/glance-default-external-api-0" Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.078816 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.120183 5058 generic.go:334] "Generic (PLEG): container finished" podID="29153d57-226a-44ac-ae33-86127aadf258" containerID="2c1ca9810eab577f6b8f32d0611934c89232c17f52ff78d8d3748f5941c6275f" exitCode=0 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.120238 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh9kk" event={"ID":"29153d57-226a-44ac-ae33-86127aadf258","Type":"ContainerDied","Data":"2c1ca9810eab577f6b8f32d0611934c89232c17f52ff78d8d3748f5941c6275f"} Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.121716 5058 generic.go:334] "Generic (PLEG): container finished" podID="220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" containerID="47bfcc71f584a9f3235d3d41f319f1ae35c0b61c540b1ac3eb73120782132327" exitCode=0 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.121750 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r4bgw" event={"ID":"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f","Type":"ContainerDied","Data":"47bfcc71f584a9f3235d3d41f319f1ae35c0b61c540b1ac3eb73120782132327"} Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.121764 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r4bgw" event={"ID":"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f","Type":"ContainerStarted","Data":"118a6466e7c7b62177aa2bf0cfa49947b8384774de5eb153d1e7688ac7e3f79a"} Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.148914 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerStarted","Data":"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"} Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.149055 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-central-agent" containerID="cri-o://2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5" gracePeriod=30 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.149261 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.149302 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="proxy-httpd" containerID="cri-o://788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243" gracePeriod=30 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.149341 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="sg-core" containerID="cri-o://15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b" gracePeriod=30 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.149369 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-notification-agent" containerID="cri-o://d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be" gracePeriod=30 Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.158601 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:08:16 
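[Editor's note: the "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above are produced by the pod lifecycle event generator diffing successive runtime snapshots. A rough sketch of that diffing, under the assumption of a simplified per-pod state map (the real PLEG in pkg/kubelet/pleg tracks more states and batches per relist):

    package main

    import "fmt"

    type containerState string

    const (
        stateRunning containerState = "running"
        stateExited  containerState = "exited"
    )

    // diffRelist compares two relist snapshots (containerID -> state) for one
    // pod and emits PLEG-style lifecycle events, roughly how the
    // ContainerStarted/ContainerDied lines in the log arise. Sketch only.
    func diffRelist(prevSnap, curSnap map[string]containerState) []string {
        var events []string
        for id, s := range curSnap {
            prev, seen := prevSnap[id]
            switch {
            case !seen && s == stateRunning:
                events = append(events, "ContainerStarted "+id)
            case seen && prev == stateRunning && s == stateExited:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prevSnap := map[string]containerState{"2c1ca981": stateRunning}
        curSnap := map[string]containerState{"2c1ca981": stateExited}
        for _, e := range diffRelist(prevSnap, curSnap) {
            fmt.Println(e) // ContainerDied 2c1ca981
        }
    }

This is also why a Died and a Started event for the same pod can surface in one batch, as with nova-cell0-db-create-r4bgw above: both transitions happened between two relists.]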
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.165891 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcch5" event={"ID":"87390a42-eb96-410d-8bca-f77fb2e3ecf9","Type":"ContainerDied","Data":"b237b52ff6889669f24b31456f7208fe61fe597bad92f2b9b63ac597254f4eaf"}
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.165944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcch5" event={"ID":"87390a42-eb96-410d-8bca-f77fb2e3ecf9","Type":"ContainerStarted","Data":"56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921"}
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.183999 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.52558249 podStartE2EDuration="6.183984556s" podCreationTimestamp="2025-10-14 07:08:10 +0000 UTC" firstStartedPulling="2025-10-14 07:08:11.01550275 +0000 UTC m=+1238.926586556" lastFinishedPulling="2025-10-14 07:08:14.673904826 +0000 UTC m=+1242.584988622" observedRunningTime="2025-10-14 07:08:16.174357242 +0000 UTC m=+1244.085441048" watchObservedRunningTime="2025-10-14 07:08:16.183984556 +0000 UTC m=+1244.095068362"
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.498726 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 07:08:16 crc kubenswrapper[5058]: W1014 07:08:16.503215 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae7394b_1174_4a56_96d5_7fe0598e1343.slice/crio-a5e9dba2743e91483ac79b661230c7330917a64b1d06bb5718e40319b2cb03b3 WatchSource:0}: Error finding container a5e9dba2743e91483ac79b661230c7330917a64b1d06bb5718e40319b2cb03b3: Status 404 returned error can't find the container with id a5e9dba2743e91483ac79b661230c7330917a64b1d06bb5718e40319b2cb03b3
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.802545 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6472a83d-6e1b-4326-a9f2-8c71c38aa6d2" path="/var/lib/kubelet/pods/6472a83d-6e1b-4326-a9f2-8c71c38aa6d2/volumes"
Oct 14 07:08:16 crc kubenswrapper[5058]: I1014 07:08:16.803987 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7775098-57ac-4e54-8b25-ce55dc5d2705" path="/var/lib/kubelet/pods/d7775098-57ac-4e54-8b25-ce55dc5d2705/volumes"
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.184934 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerStarted","Data":"a1c6b82cefdf4862cb066b04eb626d0455c0c4fe043784013c88eec253145cfa"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.184984 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerStarted","Data":"a5e9dba2743e91483ac79b661230c7330917a64b1d06bb5718e40319b2cb03b3"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188518 5058 generic.go:334] "Generic (PLEG): container finished" podID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerID="788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243" exitCode=0
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188549 5058 generic.go:334] "Generic (PLEG): container finished" podID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerID="15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b" exitCode=2
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188557 5058 generic.go:334] "Generic (PLEG): container finished" podID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerID="d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be" exitCode=0
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188594 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerDied","Data":"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188620 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerDied","Data":"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.188630 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerDied","Data":"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.191078 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerStarted","Data":"3dd806326c4ebc88f937e535e0184dd39275bf7daaa1cdd5ccba019323c49240"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.191110 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerStarted","Data":"b21b34e3ed034e80d1b5555bc955e6172e57b302e09950a381fb8622c972d702"}
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.540197 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.643821 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzsxw\" (UniqueName: \"kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw\") pod \"87390a42-eb96-410d-8bca-f77fb2e3ecf9\" (UID: \"87390a42-eb96-410d-8bca-f77fb2e3ecf9\") "
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.649642 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw" (OuterVolumeSpecName: "kube-api-access-nzsxw") pod "87390a42-eb96-410d-8bca-f77fb2e3ecf9" (UID: "87390a42-eb96-410d-8bca-f77fb2e3ecf9"). InnerVolumeSpecName "kube-api-access-nzsxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.698095 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.709372 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.745482 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzsxw\" (UniqueName: \"kubernetes.io/projected/87390a42-eb96-410d-8bca-f77fb2e3ecf9-kube-api-access-nzsxw\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.846951 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8cc7\" (UniqueName: \"kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7\") pod \"29153d57-226a-44ac-ae33-86127aadf258\" (UID: \"29153d57-226a-44ac-ae33-86127aadf258\") "
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.847033 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkkjh\" (UniqueName: \"kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh\") pod \"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f\" (UID: \"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f\") "
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.854040 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh" (OuterVolumeSpecName: "kube-api-access-xkkjh") pod "220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" (UID: "220927ac-fb79-4a9d-9b29-d1cd3a48cd4f"). InnerVolumeSpecName "kube-api-access-xkkjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.863959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7" (OuterVolumeSpecName: "kube-api-access-t8cc7") pod "29153d57-226a-44ac-ae33-86127aadf258" (UID: "29153d57-226a-44ac-ae33-86127aadf258"). InnerVolumeSpecName "kube-api-access-t8cc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.949416 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8cc7\" (UniqueName: \"kubernetes.io/projected/29153d57-226a-44ac-ae33-86127aadf258-kube-api-access-t8cc7\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:17 crc kubenswrapper[5058]: I1014 07:08:17.949452 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkkjh\" (UniqueName: \"kubernetes.io/projected/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f-kube-api-access-xkkjh\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.201516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sh9kk" event={"ID":"29153d57-226a-44ac-ae33-86127aadf258","Type":"ContainerDied","Data":"da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa"}
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.201557 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5a90ac17883077b07f0486063557e80436b2c05312ca866d03c48832e781aa"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.201567 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sh9kk"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.212777 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerStarted","Data":"6f88404a55530828d3f110f54a67b95cc6d0025e7691e4c253f22716380239e4"}
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.214712 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r4bgw" event={"ID":"220927ac-fb79-4a9d-9b29-d1cd3a48cd4f","Type":"ContainerDied","Data":"118a6466e7c7b62177aa2bf0cfa49947b8384774de5eb153d1e7688ac7e3f79a"}
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.214749 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118a6466e7c7b62177aa2bf0cfa49947b8384774de5eb153d1e7688ac7e3f79a"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.214810 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r4bgw"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.218083 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerStarted","Data":"ddf55f3c2bbe35fdc2ab51be906a947fa2e2fdff71a5c9edee6623ab01efa4ca"}
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.221122 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcch5" event={"ID":"87390a42-eb96-410d-8bca-f77fb2e3ecf9","Type":"ContainerDied","Data":"56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921"}
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.221165 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c8443e577bbcbc9cbf725bc3d269ca152e558b68bb5dfb973892655cde2921"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.221198 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcch5"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.238623 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.238604529 podStartE2EDuration="3.238604529s" podCreationTimestamp="2025-10-14 07:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:18.234333668 +0000 UTC m=+1246.145417504" watchObservedRunningTime="2025-10-14 07:08:18.238604529 +0000 UTC m=+1246.149688325"
Oct 14 07:08:18 crc kubenswrapper[5058]: I1014 07:08:18.275747 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.275729054 podStartE2EDuration="3.275729054s" podCreationTimestamp="2025-10-14 07:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:18.265193955 +0000 UTC m=+1246.176277761" watchObservedRunningTime="2025-10-14 07:08:18.275729054 +0000 UTC m=+1246.186812860"
Oct 14 07:08:20 crc kubenswrapper[5058]: I1014 07:08:20.975708 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
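[Editor's note: the pod_startup_latency_tracker entries make the bookkeeping visible. For the two glance pods no image pull was recorded (both pull stamps are the zero time) and podStartSLOduration equals podStartE2EDuration; for ceilometer-0 (entry at 07:08:16.183999 above) the SLO figure appears to be the end-to-end duration minus the image-pull window. A worked check of that reading, using the monotonic m= offsets from the ceilometer entry; the subtraction rule is my interpretation, not an official formula:

    package main

    import "fmt"

    func main() {
        // Values copied from the ceilometer-0 startup-latency entry above.
        e2e := 6.183984556 // podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp (s)

        // Image-pull window, taken from the m= monotonic-clock offsets (s).
        firstStartedPulling := 1238.926586556
        lastFinishedPulling := 1242.584988622

        // SLO duration excludes the time spent pulling images.
        slo := e2e - (lastFinishedPulling - firstStartedPulling)
        fmt.Printf("podStartSLOduration ≈ %.8f s\n", slo) // 2.52558249, as logged
    }

The arithmetic reproduces the logged podStartSLOduration=2.52558249 exactly, which supports the reading that only pull time is excluded from the SLO-facing number.]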
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046320 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046481 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046514 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046563 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046617 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046725 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6vgh\" (UniqueName: \"kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.046745 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd\") pod \"14da58d5-5676-4c24-8893-d9ab35c81a69\" (UID: \"14da58d5-5676-4c24-8893-d9ab35c81a69\") "
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.047471 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.047898 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.060563 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts" (OuterVolumeSpecName: "scripts") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.065002 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh" (OuterVolumeSpecName: "kube-api-access-f6vgh") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "kube-api-access-f6vgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.090342 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.131131 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148450 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148480 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6vgh\" (UniqueName: \"kubernetes.io/projected/14da58d5-5676-4c24-8893-d9ab35c81a69-kube-api-access-f6vgh\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148489 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148551 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148560 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.148569 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14da58d5-5676-4c24-8893-d9ab35c81a69-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.200015 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data" (OuterVolumeSpecName: "config-data") pod "14da58d5-5676-4c24-8893-d9ab35c81a69" (UID: "14da58d5-5676-4c24-8893-d9ab35c81a69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.250831 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14da58d5-5676-4c24-8893-d9ab35c81a69-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.253353 5058 generic.go:334] "Generic (PLEG): container finished" podID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerID="2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5" exitCode=0
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.253404 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerDied","Data":"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"}
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.253436 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14da58d5-5676-4c24-8893-d9ab35c81a69","Type":"ContainerDied","Data":"68d8430ef6dc03763678364e3e2f7d60b3f33a9118a304163fc6fb122b1e0bb4"}
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.253457 5058 scope.go:117] "RemoveContainer" containerID="788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.253487 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.285848 5058 scope.go:117] "RemoveContainer" containerID="15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.314417 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.324546 5058 scope.go:117] "RemoveContainer" containerID="d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.329314 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.340885 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341471 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341492 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341522 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-notification-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341535 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-notification-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341561 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87390a42-eb96-410d-8bca-f77fb2e3ecf9" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341574 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="87390a42-eb96-410d-8bca-f77fb2e3ecf9" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341604 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29153d57-226a-44ac-ae33-86127aadf258" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341617 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="29153d57-226a-44ac-ae33-86127aadf258" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341645 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-central-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341657 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-central-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341689 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="sg-core"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341702 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="sg-core"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.341730 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="proxy-httpd"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.341741 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="proxy-httpd"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342058 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="29153d57-226a-44ac-ae33-86127aadf258" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342089 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-central-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342114 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="proxy-httpd"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342144 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="87390a42-eb96-410d-8bca-f77fb2e3ecf9" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342163 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="ceilometer-notification-agent"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342191 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" containerName="sg-core"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.342211 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" containerName="mariadb-database-create"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.345291 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.350136 5058 scope.go:117] "RemoveContainer" containerID="2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.350412 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.350614 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.357332 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.409002 5058 scope.go:117] "RemoveContainer" containerID="788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.412588 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243\": container with ID starting with 788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243 not found: ID does not exist" containerID="788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.412623 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243"} err="failed to get container status \"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243\": rpc error: code = NotFound desc = could not find container \"788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243\": container with ID starting with 788f8b12c740dc2362ed87a2f18812fedb9fdeb6b62c0fe3b3fcf2623b178243 not found: ID does not exist"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.412644 5058 scope.go:117] "RemoveContainer" containerID="15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.413915 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b\": container with ID starting with 15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b not found: ID does not exist" containerID="15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.413940 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b"} err="failed to get container status \"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b\": rpc error: code = NotFound desc = could not find container \"15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b\": container with ID starting with 15bae52d1a32359a9f4e35aba07603dab3886d7c548712ae30e593a058fb3c1b not found: ID does not exist"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.413954 5058 scope.go:117] "RemoveContainer" containerID="d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.415026 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be\": container with ID starting with d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be not found: ID does not exist" containerID="d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.415049 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be"} err="failed to get container status \"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be\": rpc error: code = NotFound desc = could not find container \"d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be\": container with ID starting with d090336ff0008aefef809237bf8bc985ac76fd75f685abe35f4681964fe277be not found: ID does not exist"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.415063 5058 scope.go:117] "RemoveContainer" containerID="2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"
Oct 14 07:08:21 crc kubenswrapper[5058]: E1014 07:08:21.416412 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5\": container with ID starting with 2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5 not found: ID does not exist" containerID="2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.416436 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5"} err="failed to get container status \"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5\": rpc error: code = NotFound desc = could not find container \"2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5\": container with ID starting with 2af030fba1c1ba514470838698b1cbdc172bff7c48b9734105889311b40809a5 not found: ID does not exist"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.455913 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.455965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.456169 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfl6\" (UniqueName: \"kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0"
Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.456224 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0"
\"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.456271 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.456361 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.456475 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558319 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfl6\" (UniqueName: \"kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558378 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558442 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558474 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558506 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558533 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.558559 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data\") 
pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.559608 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.559839 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.562102 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.562227 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.562850 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.563916 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.576350 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfl6\" (UniqueName: \"kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6\") pod \"ceilometer-0\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " pod="openstack/ceilometer-0" Oct 14 07:08:21 crc kubenswrapper[5058]: I1014 07:08:21.681566 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:22 crc kubenswrapper[5058]: W1014 07:08:22.116566 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7211ac_bbb2_4435_a4b7_98e85ead7289.slice/crio-e01db6a7ad440b6de7baa5c91be00279a8fdbb2dd19da4e409bd2dc466ce97ed WatchSource:0}: Error finding container e01db6a7ad440b6de7baa5c91be00279a8fdbb2dd19da4e409bd2dc466ce97ed: Status 404 returned error can't find the container with id e01db6a7ad440b6de7baa5c91be00279a8fdbb2dd19da4e409bd2dc466ce97ed Oct 14 07:08:22 crc kubenswrapper[5058]: I1014 07:08:22.122401 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:22 crc kubenswrapper[5058]: I1014 07:08:22.273122 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerStarted","Data":"e01db6a7ad440b6de7baa5c91be00279a8fdbb2dd19da4e409bd2dc466ce97ed"} Oct 14 07:08:22 crc kubenswrapper[5058]: I1014 07:08:22.809467 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14da58d5-5676-4c24-8893-d9ab35c81a69" path="/var/lib/kubelet/pods/14da58d5-5676-4c24-8893-d9ab35c81a69/volumes" Oct 14 07:08:23 crc kubenswrapper[5058]: I1014 07:08:23.306647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerStarted","Data":"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7"} Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.082110 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d8bf-account-create-56zf4"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.084212 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.087156 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.107582 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d8bf-account-create-56zf4"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.221602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vk5b\" (UniqueName: \"kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b\") pod \"nova-api-d8bf-account-create-56zf4\" (UID: \"edc31907-c72f-4c4d-af0a-45e15e5bcca9\") " pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.269688 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d813-account-create-ck5bl"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.271074 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.290548 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.298618 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d813-account-create-ck5bl"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.316441 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerStarted","Data":"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad"} Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.327950 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vk5b\" (UniqueName: \"kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b\") pod \"nova-api-d8bf-account-create-56zf4\" (UID: \"edc31907-c72f-4c4d-af0a-45e15e5bcca9\") " pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.355010 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vk5b\" (UniqueName: \"kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b\") pod \"nova-api-d8bf-account-create-56zf4\" (UID: \"edc31907-c72f-4c4d-af0a-45e15e5bcca9\") " pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.412416 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.429049 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtr4\" (UniqueName: \"kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4\") pod \"nova-cell0-d813-account-create-ck5bl\" (UID: \"37a02c87-e97c-437f-8653-a2fd34b32b04\") " pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.481524 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-aaa5-account-create-6v9dn"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.482601 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.497039 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.497078 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aaa5-account-create-6v9dn"] Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.531204 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgtr4\" (UniqueName: \"kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4\") pod \"nova-cell0-d813-account-create-ck5bl\" (UID: \"37a02c87-e97c-437f-8653-a2fd34b32b04\") " pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.552864 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgtr4\" (UniqueName: \"kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4\") pod \"nova-cell0-d813-account-create-ck5bl\" (UID: \"37a02c87-e97c-437f-8653-a2fd34b32b04\") " pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.605675 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.642673 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4fz\" (UniqueName: \"kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz\") pod \"nova-cell1-aaa5-account-create-6v9dn\" (UID: \"30de89dc-6c60-4277-8669-d4441385cc1f\") " pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.746597 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4fz\" (UniqueName: \"kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz\") pod \"nova-cell1-aaa5-account-create-6v9dn\" (UID: \"30de89dc-6c60-4277-8669-d4441385cc1f\") " pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.769250 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4fz\" (UniqueName: \"kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz\") pod \"nova-cell1-aaa5-account-create-6v9dn\" (UID: \"30de89dc-6c60-4277-8669-d4441385cc1f\") " pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.835487 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:24 crc kubenswrapper[5058]: I1014 07:08:24.987141 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d8bf-account-create-56zf4"] Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.184618 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d813-account-create-ck5bl"] Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.316993 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aaa5-account-create-6v9dn"] Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.324740 5058 generic.go:334] "Generic (PLEG): container finished" podID="edc31907-c72f-4c4d-af0a-45e15e5bcca9" containerID="609b59e6d612b452795141ff2cf05dd02e639afa347efe89928750c54c555f15" exitCode=0 Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.324832 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d8bf-account-create-56zf4" event={"ID":"edc31907-c72f-4c4d-af0a-45e15e5bcca9","Type":"ContainerDied","Data":"609b59e6d612b452795141ff2cf05dd02e639afa347efe89928750c54c555f15"} Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.324888 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d8bf-account-create-56zf4" event={"ID":"edc31907-c72f-4c4d-af0a-45e15e5bcca9","Type":"ContainerStarted","Data":"6cc4d7c73351dee05e94c5d9ff1c50365e515993d9e95650c6bf9543a08ece9c"} Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.326864 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerStarted","Data":"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448"} Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.328171 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d813-account-create-ck5bl" event={"ID":"37a02c87-e97c-437f-8653-a2fd34b32b04","Type":"ContainerStarted","Data":"94d9dbe5a95317cc23cdc676838184759136112c85cf3363f1e7a04845520863"} Oct 14 07:08:25 crc kubenswrapper[5058]: W1014 07:08:25.367118 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30de89dc_6c60_4277_8669_d4441385cc1f.slice/crio-91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1 WatchSource:0}: Error finding container 91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1: Status 404 returned error can't find the container with id 91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1 Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.505041 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.505096 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.545144 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:25 crc kubenswrapper[5058]: I1014 07:08:25.552456 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.079628 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.080221 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.122482 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.126771 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.338969 5058 generic.go:334] "Generic (PLEG): container finished" podID="30de89dc-6c60-4277-8669-d4441385cc1f" containerID="092714e5ca8fc048bf747f1e11af7e58ef585d7d510fd82008a0a96a1b8da12e" exitCode=0 Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.339073 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" event={"ID":"30de89dc-6c60-4277-8669-d4441385cc1f","Type":"ContainerDied","Data":"092714e5ca8fc048bf747f1e11af7e58ef585d7d510fd82008a0a96a1b8da12e"} Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.339117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" event={"ID":"30de89dc-6c60-4277-8669-d4441385cc1f","Type":"ContainerStarted","Data":"91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1"} Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.343954 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerStarted","Data":"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87"} Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.344001 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.346699 5058 generic.go:334] "Generic (PLEG): container finished" podID="37a02c87-e97c-437f-8653-a2fd34b32b04" containerID="a2c72a0b9b3c3872272f34176a0ed5176dff60e9f134143e49938e8d99253ac2" exitCode=0 Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.348312 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d813-account-create-ck5bl" event={"ID":"37a02c87-e97c-437f-8653-a2fd34b32b04","Type":"ContainerDied","Data":"a2c72a0b9b3c3872272f34176a0ed5176dff60e9f134143e49938e8d99253ac2"} Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.348344 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.349643 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.349667 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.349677 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.416624 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.834563367 podStartE2EDuration="5.416600583s" podCreationTimestamp="2025-10-14 07:08:21 +0000 UTC" 
firstStartedPulling="2025-10-14 07:08:22.119335343 +0000 UTC m=+1250.030419149" lastFinishedPulling="2025-10-14 07:08:25.701372559 +0000 UTC m=+1253.612456365" observedRunningTime="2025-10-14 07:08:26.401253076 +0000 UTC m=+1254.312336902" watchObservedRunningTime="2025-10-14 07:08:26.416600583 +0000 UTC m=+1254.327684389" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.781293 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.891213 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vk5b\" (UniqueName: \"kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b\") pod \"edc31907-c72f-4c4d-af0a-45e15e5bcca9\" (UID: \"edc31907-c72f-4c4d-af0a-45e15e5bcca9\") " Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.913101 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b" (OuterVolumeSpecName: "kube-api-access-4vk5b") pod "edc31907-c72f-4c4d-af0a-45e15e5bcca9" (UID: "edc31907-c72f-4c4d-af0a-45e15e5bcca9"). InnerVolumeSpecName "kube-api-access-4vk5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:26 crc kubenswrapper[5058]: I1014 07:08:26.993398 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vk5b\" (UniqueName: \"kubernetes.io/projected/edc31907-c72f-4c4d-af0a-45e15e5bcca9-kube-api-access-4vk5b\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.360390 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d8bf-account-create-56zf4" event={"ID":"edc31907-c72f-4c4d-af0a-45e15e5bcca9","Type":"ContainerDied","Data":"6cc4d7c73351dee05e94c5d9ff1c50365e515993d9e95650c6bf9543a08ece9c"} Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.360439 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc4d7c73351dee05e94c5d9ff1c50365e515993d9e95650c6bf9543a08ece9c" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.360749 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d8bf-account-create-56zf4" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.744938 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.830735 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtr4\" (UniqueName: \"kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4\") pod \"37a02c87-e97c-437f-8653-a2fd34b32b04\" (UID: \"37a02c87-e97c-437f-8653-a2fd34b32b04\") " Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.836035 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4" (OuterVolumeSpecName: "kube-api-access-kgtr4") pod "37a02c87-e97c-437f-8653-a2fd34b32b04" (UID: "37a02c87-e97c-437f-8653-a2fd34b32b04"). InnerVolumeSpecName "kube-api-access-kgtr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.910059 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:27 crc kubenswrapper[5058]: I1014 07:08:27.932896 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgtr4\" (UniqueName: \"kubernetes.io/projected/37a02c87-e97c-437f-8653-a2fd34b32b04-kube-api-access-kgtr4\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.033457 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4fz\" (UniqueName: \"kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz\") pod \"30de89dc-6c60-4277-8669-d4441385cc1f\" (UID: \"30de89dc-6c60-4277-8669-d4441385cc1f\") " Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.039980 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz" (OuterVolumeSpecName: "kube-api-access-4b4fz") pod "30de89dc-6c60-4277-8669-d4441385cc1f" (UID: "30de89dc-6c60-4277-8669-d4441385cc1f"). InnerVolumeSpecName "kube-api-access-4b4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.135997 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4fz\" (UniqueName: \"kubernetes.io/projected/30de89dc-6c60-4277-8669-d4441385cc1f-kube-api-access-4b4fz\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.246571 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.330490 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.370019 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d813-account-create-ck5bl" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.375883 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d813-account-create-ck5bl" event={"ID":"37a02c87-e97c-437f-8653-a2fd34b32b04","Type":"ContainerDied","Data":"94d9dbe5a95317cc23cdc676838184759136112c85cf3363f1e7a04845520863"} Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.375943 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94d9dbe5a95317cc23cdc676838184759136112c85cf3363f1e7a04845520863" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.377551 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.377575 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.377758 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.377787 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aaa5-account-create-6v9dn" event={"ID":"30de89dc-6c60-4277-8669-d4441385cc1f","Type":"ContainerDied","Data":"91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1"} Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.377991 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91428a40baf0df131b827a5bcfbaec7b7767d1a66c2e08e78608f7173ea0dcc1" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.453389 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 07:08:28 crc kubenswrapper[5058]: I1014 07:08:28.973974 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.476982 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vqb94"] Oct 14 07:08:29 crc kubenswrapper[5058]: E1014 07:08:29.477531 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30de89dc-6c60-4277-8669-d4441385cc1f" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477543 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="30de89dc-6c60-4277-8669-d4441385cc1f" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: E1014 07:08:29.477577 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc31907-c72f-4c4d-af0a-45e15e5bcca9" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477583 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc31907-c72f-4c4d-af0a-45e15e5bcca9" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: E1014 07:08:29.477593 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a02c87-e97c-437f-8653-a2fd34b32b04" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477599 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a02c87-e97c-437f-8653-a2fd34b32b04" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477757 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="30de89dc-6c60-4277-8669-d4441385cc1f" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477768 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc31907-c72f-4c4d-af0a-45e15e5bcca9" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.477785 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a02c87-e97c-437f-8653-a2fd34b32b04" containerName="mariadb-account-create" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.478314 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.480221 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.480652 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.481052 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w5txc" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.492652 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vqb94"] Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.661777 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrg5\" (UniqueName: \"kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.662124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.662284 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.662537 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.764748 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrg5\" (UniqueName: \"kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.765104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.765227 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: 
\"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.765334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.771744 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.773272 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.787918 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.790076 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrg5\" (UniqueName: \"kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5\") pod \"nova-cell0-conductor-db-sync-vqb94\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") " pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:29 crc kubenswrapper[5058]: I1014 07:08:29.832032 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:30 crc kubenswrapper[5058]: I1014 07:08:30.332981 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vqb94"] Oct 14 07:08:30 crc kubenswrapper[5058]: I1014 07:08:30.394810 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vqb94" event={"ID":"75b09a00-f000-4bcc-ab28-07699c6cbeb5","Type":"ContainerStarted","Data":"800823f71ebc27693a468b5e5727e493bb62a91cb73840ef92a335925be3c64a"} Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.254439 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.255012 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-central-agent" containerID="cri-o://68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" gracePeriod=30 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.255122 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-notification-agent" containerID="cri-o://6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" gracePeriod=30 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.255086 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="proxy-httpd" containerID="cri-o://fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" gracePeriod=30 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.255102 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="sg-core" containerID="cri-o://edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" gracePeriod=30 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.421015 5058 generic.go:334] "Generic (PLEG): container finished" podID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerID="fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" exitCode=0 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.421294 5058 generic.go:334] "Generic (PLEG): container finished" podID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerID="edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" exitCode=2 Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.421078 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerDied","Data":"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87"} Oct 14 07:08:32 crc kubenswrapper[5058]: I1014 07:08:32.421331 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerDied","Data":"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448"} Oct 14 07:08:32 crc kubenswrapper[5058]: E1014 07:08:32.685146 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7211ac_bbb2_4435_a4b7_98e85ead7289.slice/crio-68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7.scope\": RecentStats: unable to find data in memory cache]" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.322812 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.446993 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447110 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447136 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447207 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447260 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447295 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.447348 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pfl6\" (UniqueName: \"kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6\") pod \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\" (UID: \"bc7211ac-bbb2-4435-a4b7-98e85ead7289\") " Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.450066 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.450138 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452543 5058 generic.go:334] "Generic (PLEG): container finished" podID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerID="6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" exitCode=0 Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452585 5058 generic.go:334] "Generic (PLEG): container finished" podID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerID="68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" exitCode=0 Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452615 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerDied","Data":"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad"} Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerDied","Data":"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7"} Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452682 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc7211ac-bbb2-4435-a4b7-98e85ead7289","Type":"ContainerDied","Data":"e01db6a7ad440b6de7baa5c91be00279a8fdbb2dd19da4e409bd2dc466ce97ed"} Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452705 5058 scope.go:117] "RemoveContainer" containerID="fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.452706 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.457748 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts" (OuterVolumeSpecName: "scripts") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.460785 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6" (OuterVolumeSpecName: "kube-api-access-6pfl6") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "kube-api-access-6pfl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.510569 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.533401 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550147 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550175 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550186 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550196 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550206 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc7211ac-bbb2-4435-a4b7-98e85ead7289-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.550214 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pfl6\" (UniqueName: \"kubernetes.io/projected/bc7211ac-bbb2-4435-a4b7-98e85ead7289-kube-api-access-6pfl6\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.585195 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data" (OuterVolumeSpecName: "config-data") pod "bc7211ac-bbb2-4435-a4b7-98e85ead7289" (UID: "bc7211ac-bbb2-4435-a4b7-98e85ead7289"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.648789 5058 scope.go:117] "RemoveContainer" containerID="edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.651373 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7211ac-bbb2-4435-a4b7-98e85ead7289-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.666851 5058 scope.go:117] "RemoveContainer" containerID="6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.694703 5058 scope.go:117] "RemoveContainer" containerID="68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.727128 5058 scope.go:117] "RemoveContainer" containerID="fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.727600 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87\": container with ID starting with fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87 not found: ID does not exist" containerID="fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.727648 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87"} err="failed to get container status \"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87\": rpc error: code = NotFound desc = could not find container \"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87\": container with ID starting with fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.727676 5058 scope.go:117] "RemoveContainer" containerID="edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.728136 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448\": container with ID starting with edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448 not found: ID does not exist" containerID="edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.728167 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448"} err="failed to get container status \"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448\": rpc error: code = NotFound desc = could not find container \"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448\": container with ID starting with edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.728191 5058 scope.go:117] "RemoveContainer" containerID="6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 
07:08:33.728484 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad\": container with ID starting with 6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad not found: ID does not exist" containerID="6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.728521 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad"} err="failed to get container status \"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad\": rpc error: code = NotFound desc = could not find container \"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad\": container with ID starting with 6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.728547 5058 scope.go:117] "RemoveContainer" containerID="68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.728980 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7\": container with ID starting with 68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7 not found: ID does not exist" containerID="68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729005 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7"} err="failed to get container status \"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7\": rpc error: code = NotFound desc = could not find container \"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7\": container with ID starting with 68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729021 5058 scope.go:117] "RemoveContainer" containerID="fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729267 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87"} err="failed to get container status \"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87\": rpc error: code = NotFound desc = could not find container \"fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87\": container with ID starting with fe4bb4052d169d465a74c8539830ec7573b2977068534296f6a199152497eb87 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729286 5058 scope.go:117] "RemoveContainer" containerID="edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729609 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448"} err="failed to get container status \"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448\": rpc error: code = 
NotFound desc = could not find container \"edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448\": container with ID starting with edb6f90a9a420268ed43c54a67cbec2e6759529c8ec3154b8bd01b375bd2c448 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.729627 5058 scope.go:117] "RemoveContainer" containerID="6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.730020 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad"} err="failed to get container status \"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad\": rpc error: code = NotFound desc = could not find container \"6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad\": container with ID starting with 6434b02bf35db152434328ff54daa618497551a70ba9ad02a4170fa3b3c840ad not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.730040 5058 scope.go:117] "RemoveContainer" containerID="68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.730313 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7"} err="failed to get container status \"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7\": rpc error: code = NotFound desc = could not find container \"68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7\": container with ID starting with 68c83a7441aac3b32c0f39e88a898fde96b5211907f4c9b20b7ed5d46c5f68c7 not found: ID does not exist" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.787962 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.795060 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.811719 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.812098 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-notification-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812113 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-notification-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.812147 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="sg-core" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812153 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="sg-core" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.812160 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-central-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812166 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-central-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: E1014 07:08:33.812178 5058 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="proxy-httpd" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812184 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="proxy-httpd" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812330 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-notification-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812345 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="ceilometer-central-agent" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812353 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="proxy-httpd" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.812364 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" containerName="sg-core" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.813896 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.816671 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.817003 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.878353 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.979334 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.979652 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.980081 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlql\" (UniqueName: \"kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.980227 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.980463 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.980568 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:33 crc kubenswrapper[5058]: I1014 07:08:33.980631 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.082308 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.082621 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.082984 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.083082 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.083442 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.083522 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.083609 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlql\" (UniqueName: \"kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.083712 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.084059 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.086432 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.086647 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.087151 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.088405 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.100984 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlql\" (UniqueName: \"kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql\") pod \"ceilometer-0\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.189365 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.672545 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:34 crc kubenswrapper[5058]: I1014 07:08:34.812393 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7211ac-bbb2-4435-a4b7-98e85ead7289" path="/var/lib/kubelet/pods/bc7211ac-bbb2-4435-a4b7-98e85ead7289/volumes" Oct 14 07:08:38 crc kubenswrapper[5058]: W1014 07:08:38.620035 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aafdb52_7751_4dd5_8493_4104ee271e46.slice/crio-350c575aadc94d0b658730d78b7834194ff352710a10a90f71a4d342ef199817 WatchSource:0}: Error finding container 350c575aadc94d0b658730d78b7834194ff352710a10a90f71a4d342ef199817: Status 404 returned error can't find the container with id 350c575aadc94d0b658730d78b7834194ff352710a10a90f71a4d342ef199817 Oct 14 07:08:39 crc kubenswrapper[5058]: I1014 07:08:39.514056 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vqb94" event={"ID":"75b09a00-f000-4bcc-ab28-07699c6cbeb5","Type":"ContainerStarted","Data":"89aab696d2006e8237245cc838e2d37a49d39b388091b004bb1bf280d2d579a7"} Oct 14 07:08:39 crc kubenswrapper[5058]: I1014 07:08:39.515633 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerStarted","Data":"350c575aadc94d0b658730d78b7834194ff352710a10a90f71a4d342ef199817"} Oct 14 07:08:39 crc kubenswrapper[5058]: I1014 07:08:39.529252 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vqb94" podStartSLOduration=1.790574482 podStartE2EDuration="10.529230567s" podCreationTimestamp="2025-10-14 07:08:29 +0000 UTC" firstStartedPulling="2025-10-14 07:08:30.33899481 +0000 UTC m=+1258.250078626" lastFinishedPulling="2025-10-14 07:08:39.077650905 +0000 UTC m=+1266.988734711" observedRunningTime="2025-10-14 07:08:39.526653633 +0000 UTC m=+1267.437737459" watchObservedRunningTime="2025-10-14 07:08:39.529230567 +0000 UTC m=+1267.440314373" Oct 14 07:08:40 crc kubenswrapper[5058]: I1014 07:08:40.525950 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerStarted","Data":"0a698bc2780f09a024659e12ec7f6923558b0cdb3a7998eecfe4f00f4e32621a"} Oct 14 07:08:41 crc kubenswrapper[5058]: I1014 07:08:41.416981 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:41 crc kubenswrapper[5058]: I1014 07:08:41.537654 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerStarted","Data":"7d37300c5c1bc6f9f18a6d5bb38e384ecf0958a4685077c621b26ed1234e353f"} Oct 14 07:08:42 crc kubenswrapper[5058]: I1014 07:08:42.548872 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerStarted","Data":"cd578884bee1850ee586e803c4ad25bcd3ba82dfddf59efb36bfffd298595a88"} Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.562229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerStarted","Data":"32ceaf7be1f5309ece848b516a852782cd5048f73c22cc1088000214df07a6cc"} Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.562699 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-central-agent" containerID="cri-o://0a698bc2780f09a024659e12ec7f6923558b0cdb3a7998eecfe4f00f4e32621a" gracePeriod=30 Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.563005 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.563134 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="proxy-httpd" containerID="cri-o://32ceaf7be1f5309ece848b516a852782cd5048f73c22cc1088000214df07a6cc" gracePeriod=30 Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.563300 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="sg-core" containerID="cri-o://cd578884bee1850ee586e803c4ad25bcd3ba82dfddf59efb36bfffd298595a88" gracePeriod=30 Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.563383 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-notification-agent" containerID="cri-o://7d37300c5c1bc6f9f18a6d5bb38e384ecf0958a4685077c621b26ed1234e353f" gracePeriod=30 Oct 14 07:08:43 crc kubenswrapper[5058]: I1014 07:08:43.600793 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.817058435 podStartE2EDuration="10.600779682s" podCreationTimestamp="2025-10-14 07:08:33 +0000 UTC" firstStartedPulling="2025-10-14 07:08:38.992870665 +0000 UTC m=+1266.903954471" lastFinishedPulling="2025-10-14 07:08:42.776591872 +0000 UTC m=+1270.687675718" observedRunningTime="2025-10-14 07:08:43.595826381 +0000 UTC m=+1271.506910197" watchObservedRunningTime="2025-10-14 07:08:43.600779682 +0000 UTC m=+1271.511863478" Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576113 5058 generic.go:334] "Generic (PLEG): container finished" podID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerID="32ceaf7be1f5309ece848b516a852782cd5048f73c22cc1088000214df07a6cc" exitCode=0 Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576477 5058 generic.go:334] "Generic (PLEG): container finished" podID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerID="cd578884bee1850ee586e803c4ad25bcd3ba82dfddf59efb36bfffd298595a88" exitCode=2 Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576493 5058 generic.go:334] "Generic (PLEG): container finished" podID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerID="7d37300c5c1bc6f9f18a6d5bb38e384ecf0958a4685077c621b26ed1234e353f" exitCode=0 Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576218 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerDied","Data":"32ceaf7be1f5309ece848b516a852782cd5048f73c22cc1088000214df07a6cc"} Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576548 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerDied","Data":"cd578884bee1850ee586e803c4ad25bcd3ba82dfddf59efb36bfffd298595a88"} Oct 14 07:08:44 crc kubenswrapper[5058]: I1014 07:08:44.576579 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerDied","Data":"7d37300c5c1bc6f9f18a6d5bb38e384ecf0958a4685077c621b26ed1234e353f"} Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.628079 5058 generic.go:334] "Generic (PLEG): container finished" podID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerID="0a698bc2780f09a024659e12ec7f6923558b0cdb3a7998eecfe4f00f4e32621a" exitCode=0 Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.628682 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerDied","Data":"0a698bc2780f09a024659e12ec7f6923558b0cdb3a7998eecfe4f00f4e32621a"} Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.810515 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.955516 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nlql\" (UniqueName: \"kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.955596 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.955772 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.957454 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.957487 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.957541 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml\") pod \"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.957564 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd\") pod 
\"7aafdb52-7751-4dd5-8493-4104ee271e46\" (UID: \"7aafdb52-7751-4dd5-8493-4104ee271e46\") " Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.957994 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.958223 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.958253 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.964115 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql" (OuterVolumeSpecName: "kube-api-access-2nlql") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "kube-api-access-2nlql". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.970077 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts" (OuterVolumeSpecName: "scripts") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:46 crc kubenswrapper[5058]: I1014 07:08:46.989276 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.046548 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.060045 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nlql\" (UniqueName: \"kubernetes.io/projected/7aafdb52-7751-4dd5-8493-4104ee271e46-kube-api-access-2nlql\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.060081 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.060094 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.060104 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.060116 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aafdb52-7751-4dd5-8493-4104ee271e46-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.068272 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data" (OuterVolumeSpecName: "config-data") pod "7aafdb52-7751-4dd5-8493-4104ee271e46" (UID: "7aafdb52-7751-4dd5-8493-4104ee271e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.163831 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aafdb52-7751-4dd5-8493-4104ee271e46-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.644335 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aafdb52-7751-4dd5-8493-4104ee271e46","Type":"ContainerDied","Data":"350c575aadc94d0b658730d78b7834194ff352710a10a90f71a4d342ef199817"} Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.644694 5058 scope.go:117] "RemoveContainer" containerID="32ceaf7be1f5309ece848b516a852782cd5048f73c22cc1088000214df07a6cc" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.644923 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.644923 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.683772 5058 scope.go:117] "RemoveContainer" containerID="cd578884bee1850ee586e803c4ad25bcd3ba82dfddf59efb36bfffd298595a88"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.696168 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.717297 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.717365 5058 scope.go:117] "RemoveContainer" containerID="7d37300c5c1bc6f9f18a6d5bb38e384ecf0958a4685077c621b26ed1234e353f"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.726273 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:47 crc kubenswrapper[5058]: E1014 07:08:47.727019 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-notification-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727036 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-notification-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: E1014 07:08:47.727050 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="sg-core"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727057 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="sg-core"
Oct 14 07:08:47 crc kubenswrapper[5058]: E1014 07:08:47.727078 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="proxy-httpd"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727086 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="proxy-httpd"
Oct 14 07:08:47 crc kubenswrapper[5058]: E1014 07:08:47.727109 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-central-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727116 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-central-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727321 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-central-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727347 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="ceilometer-notification-agent"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727362 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="sg-core"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.727383 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" containerName="proxy-httpd"
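Before the replacement ceilometer-0 (same name, new UID 6cdac73f-f4a2-4ce0-a8c0-75912eeb203e) is admitted, the cpu_manager and memory_manager purge per-container state recorded under the old UID, as the RemoveStaleState and "Deleted CPUSet assignment" entries show. A rough Go sketch of that cleanup follows, with a plain assignment map standing in for the real manager state.

```go
// Sketch of RemoveStaleState-style cleanup: drop per-container resource
// assignments whose pod UID is no longer active. The map of strings is a
// stand-in for kubelet's real cpu/memory manager state.
package main

import "fmt"

func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	oldUID := "7aafdb52-7751-4dd5-8493-4104ee271e46"
	newUID := "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"
	assignments := map[string]map[string]string{
		oldUID: {"ceilometer-central-agent": "cpus 0-1", "sg-core": "cpus 2"},
	}
	removeStaleState(assignments, map[string]bool{newUID: true})
}
```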
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.729484 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.731918 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.732117 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.743118 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.753640 5058 scope.go:117] "RemoveContainer" containerID="0a698bc2780f09a024659e12ec7f6923558b0cdb3a7998eecfe4f00f4e32621a"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877652 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877698 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877723 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877774 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877866 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klph2\" (UniqueName: \"kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.877893 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0"
Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979173 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName:
\"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979275 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979317 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979371 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klph2\" (UniqueName: \"kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979443 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979508 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979903 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.979931 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.985165 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.985383 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.985943 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.993956 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:47 crc kubenswrapper[5058]: I1014 07:08:47.996524 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klph2\" (UniqueName: \"kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2\") pod \"ceilometer-0\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " pod="openstack/ceilometer-0" Oct 14 07:08:48 crc kubenswrapper[5058]: I1014 07:08:48.059066 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:08:48 crc kubenswrapper[5058]: I1014 07:08:48.519024 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:08:48 crc kubenswrapper[5058]: W1014 07:08:48.533383 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cdac73f_f4a2_4ce0_a8c0_75912eeb203e.slice/crio-50cd86af4f288a1c38852a7808da708ad5102a6a5c64f02d675ca46d1d0356dd WatchSource:0}: Error finding container 50cd86af4f288a1c38852a7808da708ad5102a6a5c64f02d675ca46d1d0356dd: Status 404 returned error can't find the container with id 50cd86af4f288a1c38852a7808da708ad5102a6a5c64f02d675ca46d1d0356dd Oct 14 07:08:48 crc kubenswrapper[5058]: I1014 07:08:48.657841 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerStarted","Data":"50cd86af4f288a1c38852a7808da708ad5102a6a5c64f02d675ca46d1d0356dd"} Oct 14 07:08:48 crc kubenswrapper[5058]: I1014 07:08:48.829624 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aafdb52-7751-4dd5-8493-4104ee271e46" path="/var/lib/kubelet/pods/7aafdb52-7751-4dd5-8493-4104ee271e46/volumes" Oct 14 07:08:49 crc kubenswrapper[5058]: I1014 07:08:49.673606 5058 generic.go:334] "Generic (PLEG): container finished" podID="75b09a00-f000-4bcc-ab28-07699c6cbeb5" containerID="89aab696d2006e8237245cc838e2d37a49d39b388091b004bb1bf280d2d579a7" exitCode=0 Oct 14 07:08:49 crc kubenswrapper[5058]: I1014 07:08:49.673718 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vqb94" event={"ID":"75b09a00-f000-4bcc-ab28-07699c6cbeb5","Type":"ContainerDied","Data":"89aab696d2006e8237245cc838e2d37a49d39b388091b004bb1bf280d2d579a7"} Oct 14 07:08:49 crc kubenswrapper[5058]: I1014 07:08:49.678032 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerStarted","Data":"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28"} Oct 14 07:08:50 crc 
Oct 14 07:08:50 crc kubenswrapper[5058]: I1014 07:08:50.692939 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerStarted","Data":"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf"}
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.031440 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vqb94"
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.165190 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle\") pod \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") "
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.165274 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts\") pod \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") "
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.165321 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqrg5\" (UniqueName: \"kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5\") pod \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") "
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.165511 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data\") pod \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\" (UID: \"75b09a00-f000-4bcc-ab28-07699c6cbeb5\") "
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.169528 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts" (OuterVolumeSpecName: "scripts") pod "75b09a00-f000-4bcc-ab28-07699c6cbeb5" (UID: "75b09a00-f000-4bcc-ab28-07699c6cbeb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.170168 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5" (OuterVolumeSpecName: "kube-api-access-xqrg5") pod "75b09a00-f000-4bcc-ab28-07699c6cbeb5" (UID: "75b09a00-f000-4bcc-ab28-07699c6cbeb5"). InnerVolumeSpecName "kube-api-access-xqrg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.190078 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b09a00-f000-4bcc-ab28-07699c6cbeb5" (UID: "75b09a00-f000-4bcc-ab28-07699c6cbeb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.200048 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data" (OuterVolumeSpecName: "config-data") pod "75b09a00-f000-4bcc-ab28-07699c6cbeb5" (UID: "75b09a00-f000-4bcc-ab28-07699c6cbeb5").
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.267782 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqrg5\" (UniqueName: \"kubernetes.io/projected/75b09a00-f000-4bcc-ab28-07699c6cbeb5-kube-api-access-xqrg5\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.267855 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.267878 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.267896 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b09a00-f000-4bcc-ab28-07699c6cbeb5-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.730364 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vqb94" event={"ID":"75b09a00-f000-4bcc-ab28-07699c6cbeb5","Type":"ContainerDied","Data":"800823f71ebc27693a468b5e5727e493bb62a91cb73840ef92a335925be3c64a"} Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.730419 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800823f71ebc27693a468b5e5727e493bb62a91cb73840ef92a335925be3c64a" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.730381 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vqb94" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.739886 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerStarted","Data":"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3"} Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.826478 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:08:51 crc kubenswrapper[5058]: E1014 07:08:51.827450 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b09a00-f000-4bcc-ab28-07699c6cbeb5" containerName="nova-cell0-conductor-db-sync" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.827482 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b09a00-f000-4bcc-ab28-07699c6cbeb5" containerName="nova-cell0-conductor-db-sync" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.827820 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b09a00-f000-4bcc-ab28-07699c6cbeb5" containerName="nova-cell0-conductor-db-sync" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.828545 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.835472 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w5txc" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.835785 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.839375 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.981234 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.981351 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkcv\" (UniqueName: \"kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:51 crc kubenswrapper[5058]: I1014 07:08:51.981407 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.084171 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.084255 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkcv\" (UniqueName: \"kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.084289 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.092861 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.094313 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.104959 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkcv\" (UniqueName: \"kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv\") pod \"nova-cell0-conductor-0\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.152745 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 07:08:52 crc kubenswrapper[5058]: W1014 07:08:52.674734 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod195c5812_232a_4b0e_9c5d_ba4b1af44126.slice/crio-a122f9a61d0653ac3a94a42e2128d07265ba88187826e9fd9beda152d8584b86 WatchSource:0}: Error finding container a122f9a61d0653ac3a94a42e2128d07265ba88187826e9fd9beda152d8584b86: Status 404 returned error can't find the container with id a122f9a61d0653ac3a94a42e2128d07265ba88187826e9fd9beda152d8584b86 Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.700250 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.751961 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"195c5812-232a-4b0e-9c5d-ba4b1af44126","Type":"ContainerStarted","Data":"a122f9a61d0653ac3a94a42e2128d07265ba88187826e9fd9beda152d8584b86"} Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.755908 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerStarted","Data":"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801"} Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.757890 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:08:52 crc kubenswrapper[5058]: I1014 07:08:52.788023 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.266037835 podStartE2EDuration="5.788001034s" podCreationTimestamp="2025-10-14 07:08:47 +0000 UTC" firstStartedPulling="2025-10-14 07:08:48.535517166 +0000 UTC m=+1276.446601002" lastFinishedPulling="2025-10-14 07:08:52.057480355 +0000 UTC m=+1279.968564201" observedRunningTime="2025-10-14 07:08:52.780338226 +0000 UTC m=+1280.691422052" watchObservedRunningTime="2025-10-14 07:08:52.788001034 +0000 UTC m=+1280.699084840" Oct 14 07:08:53 crc kubenswrapper[5058]: I1014 07:08:53.769418 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"195c5812-232a-4b0e-9c5d-ba4b1af44126","Type":"ContainerStarted","Data":"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec"} Oct 14 07:08:53 crc kubenswrapper[5058]: I1014 07:08:53.803491 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.803475458 podStartE2EDuration="2.803475458s" podCreationTimestamp="2025-10-14 07:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:53.793844725 +0000 UTC m=+1281.704928541" watchObservedRunningTime="2025-10-14 
Oct 14 07:08:53 crc kubenswrapper[5058]: I1014 07:08:53.803491 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.803475458 podStartE2EDuration="2.803475458s" podCreationTimestamp="2025-10-14 07:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:53.793844725 +0000 UTC m=+1281.704928541" watchObservedRunningTime="2025-10-14 07:08:53.803475458 +0000 UTC m=+1281.714559264"
Oct 14 07:08:54 crc kubenswrapper[5058]: I1014 07:08:54.776643 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.205763 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.760941 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4xllk"]
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.762378 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4xllk"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.772449 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4xllk"]
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.788654 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.788917 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.906469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.906568 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.906606 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9q9x\" (UniqueName: \"kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.906661 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk"
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.951215 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.952921 5058 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.954674 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 07:08:57 crc kubenswrapper[5058]: I1014 07:08:57.961731 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.008026 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.008111 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.008147 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9q9x\" (UniqueName: \"kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.008192 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.012651 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.014501 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.024298 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.024607 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.024664 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.028606 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.038703 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9q9x\" (UniqueName: \"kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x\") pod \"nova-cell0-cell-mapping-4xllk\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.051879 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.082026 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.083656 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.084738 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.088241 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.097456 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110120 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fhg\" (UniqueName: \"kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110410 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110484 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110616 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110670 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9z4\" (UniqueName: \"kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110701 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.110744 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.185812 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.187035 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.195431 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.200298 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211828 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211882 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211906 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211940 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d52bd\" (UniqueName: \"kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211971 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.211998 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9z4\" (UniqueName: \"kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.212019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.212044 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.212067 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4fhg\" (UniqueName: 
\"kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.212100 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.212124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.227101 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.227262 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.227769 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.260191 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.261998 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.275412 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4fhg\" (UniqueName: \"kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg\") pod \"nova-api-0\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.276679 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.277139 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9z4\" (UniqueName: \"kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4\") pod \"nova-cell1-novncproxy-0\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") 
" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.308198 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.308959 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315186 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjwc\" (UniqueName: \"kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315246 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315285 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315323 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d52bd\" (UniqueName: \"kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315411 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315434 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.315461 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.317292 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.324098 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.346994 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.371627 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.373867 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d52bd\" (UniqueName: \"kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd\") pod \"nova-metadata-0\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419286 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419335 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419394 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419447 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419507 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjwc\" (UniqueName: \"kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419532 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419587 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419623 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.419656 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvgr\" (UniqueName: \"kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.433339 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.437936 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjwc\" (UniqueName: \"kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.439418 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvgr\" (UniqueName: \"kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521577 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521599 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521631 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521724 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.521744 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.522562 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.522882 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.523065 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.523260 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.523786 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.539468 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llvgr\" (UniqueName: \"kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr\") pod \"dnsmasq-dns-d74749bf5-4vkd7\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.573818 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.643194 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.658450 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.669215 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.720503 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4xllk"] Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.838784 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4xllk" event={"ID":"88700bea-57f8-4d34-a17f-5e5f85792862","Type":"ContainerStarted","Data":"7742748b3d5a13748f7f74727cf344a3cde0df58c9302986c10a57fee38716f0"} Oct 14 07:08:58 crc kubenswrapper[5058]: I1014 07:08:58.887848 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:08:58 crc kubenswrapper[5058]: W1014 07:08:58.922929 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5be17ea0_7e76_455f_8ab5_4dc85bb834c8.slice/crio-153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e WatchSource:0}: Error finding container 153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e: Status 404 returned error can't find the container with id 153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.106617 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:08:59 crc kubenswrapper[5058]: W1014 07:08:59.120373 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d4fccfe_1d25_4088_a2b5_b49217c514c7.slice/crio-c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7 WatchSource:0}: Error finding container c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7: Status 404 returned error can't find the container with id c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7 Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.142324 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qk69x"] Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.143684 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.147957 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.147974 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.156574 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qk69x"] Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.244873 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.244946 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hfm\" (UniqueName: \"kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.245005 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.245041 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.272136 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.283433 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.346060 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.346122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.346190 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.346241 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hfm\" (UniqueName: \"kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.352379 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.353131 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.366258 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.369322 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hfm\" (UniqueName: \"kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm\") pod \"nova-cell1-conductor-db-sync-qk69x\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: W1014 07:08:59.456714 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b456d32_fcc3_4df4_b3cd_0b1ce6c83263.slice/crio-c9b3206bad8e61e787958faadb342b400d8dde49cfaf1672d672281cf0bc1ef2 WatchSource:0}: Error finding container c9b3206bad8e61e787958faadb342b400d8dde49cfaf1672d672281cf0bc1ef2: Status 404 returned error can't find the container with id c9b3206bad8e61e787958faadb342b400d8dde49cfaf1672d672281cf0bc1ef2 Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.457990 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.474811 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.858923 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b79516-f3d1-4394-8217-f450e54a5379","Type":"ContainerStarted","Data":"d191020a585c7bef2d51d0ddd309f6779819e7dac1b9289302123197a47cc08e"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.863185 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5be17ea0-7e76-455f-8ab5-4dc85bb834c8","Type":"ContainerStarted","Data":"153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.867321 5058 generic.go:334] "Generic (PLEG): container finished" podID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerID="d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea" exitCode=0 Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.867418 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" event={"ID":"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263","Type":"ContainerDied","Data":"d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.867454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" event={"ID":"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263","Type":"ContainerStarted","Data":"c9b3206bad8e61e787958faadb342b400d8dde49cfaf1672d672281cf0bc1ef2"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.872734 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4xllk" event={"ID":"88700bea-57f8-4d34-a17f-5e5f85792862","Type":"ContainerStarted","Data":"dd8d9264e90400aef2e4a4308ff7a28d4739a18cf3ccd1fa4028410913e9f4cb"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.878266 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerStarted","Data":"9228a7143dd5a5f2cc023d2f3c77f34ed4106ac9d87629739c50d9a5736841e6"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.898240 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerStarted","Data":"c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7"} Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.910634 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4xllk" podStartSLOduration=2.910617708 podStartE2EDuration="2.910617708s" podCreationTimestamp="2025-10-14 07:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:08:59.908736144 +0000 UTC m=+1287.819819950" watchObservedRunningTime="2025-10-14 07:08:59.910617708 +0000 UTC m=+1287.821701514" Oct 14 07:08:59 crc kubenswrapper[5058]: I1014 07:08:59.968187 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qk69x"] Oct 14 07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.919287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qk69x" event={"ID":"15377fc0-cfba-460c-b865-e8fb66fea612","Type":"ContainerStarted","Data":"a7097745ea8c09cfe21aa321a8f31a6de450e38a41d2d22287ccb498aca9574d"} Oct 14 
07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.919668 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qk69x" event={"ID":"15377fc0-cfba-460c-b865-e8fb66fea612","Type":"ContainerStarted","Data":"812f9503e23fe8b607667751ed0001e302f4a407dd5dadbe270c3bc034a4b825"} Oct 14 07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.928065 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" event={"ID":"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263","Type":"ContainerStarted","Data":"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff"} Oct 14 07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.928103 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.941061 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qk69x" podStartSLOduration=1.941048097 podStartE2EDuration="1.941048097s" podCreationTimestamp="2025-10-14 07:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:00.933889914 +0000 UTC m=+1288.844973730" watchObservedRunningTime="2025-10-14 07:09:00.941048097 +0000 UTC m=+1288.852131903" Oct 14 07:09:00 crc kubenswrapper[5058]: I1014 07:09:00.959853 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" podStartSLOduration=2.959834461 podStartE2EDuration="2.959834461s" podCreationTimestamp="2025-10-14 07:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:00.950196887 +0000 UTC m=+1288.861280713" watchObservedRunningTime="2025-10-14 07:09:00.959834461 +0000 UTC m=+1288.870918267" Oct 14 07:09:01 crc kubenswrapper[5058]: I1014 07:09:01.572720 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:01 crc kubenswrapper[5058]: I1014 07:09:01.580604 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:03 crc kubenswrapper[5058]: I1014 07:09:03.663352 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:09:03 crc kubenswrapper[5058]: I1014 07:09:03.663582 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:09:03 crc kubenswrapper[5058]: I1014 07:09:03.967516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b79516-f3d1-4394-8217-f450e54a5379","Type":"ContainerStarted","Data":"d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0"} Oct 14 07:09:03 crc kubenswrapper[5058]: I1014 07:09:03.984374 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"5be17ea0-7e76-455f-8ab5-4dc85bb834c8","Type":"ContainerStarted","Data":"70e91a24551b0735af960c2cc36eeff8a125d876bf6da56d8d6d8fff049d0832"} Oct 14 07:09:03 crc kubenswrapper[5058]: I1014 07:09:03.984761 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://70e91a24551b0735af960c2cc36eeff8a125d876bf6da56d8d6d8fff049d0832" gracePeriod=30 Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.003330 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerStarted","Data":"f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4"} Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.003371 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerStarted","Data":"4d2ca79f27d42a301586f6dcab92f4bec5bd1dd7f66d712c73c46b4eceb65bf8"} Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.003491 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-log" containerID="cri-o://4d2ca79f27d42a301586f6dcab92f4bec5bd1dd7f66d712c73c46b4eceb65bf8" gracePeriod=30 Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.003752 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-metadata" containerID="cri-o://f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4" gracePeriod=30 Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.007231 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerStarted","Data":"10ec47490e6421a71c56dc9c4018cd51961f35995bc57eb573f4aaacaf119eea"} Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.007263 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerStarted","Data":"eac297a1ab7de0fc6708883d0bb399fc2467a293069482118a7213d02ee1169d"} Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.014284 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.075520083 podStartE2EDuration="6.014260545s" podCreationTimestamp="2025-10-14 07:08:58 +0000 UTC" firstStartedPulling="2025-10-14 07:08:59.323448893 +0000 UTC m=+1287.234532699" lastFinishedPulling="2025-10-14 07:09:03.262189345 +0000 UTC m=+1291.173273161" observedRunningTime="2025-10-14 07:09:03.989011437 +0000 UTC m=+1291.900095263" watchObservedRunningTime="2025-10-14 07:09:04.014260545 +0000 UTC m=+1291.925344351" Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.024559 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.710296085 podStartE2EDuration="7.024542487s" podCreationTimestamp="2025-10-14 07:08:57 +0000 UTC" firstStartedPulling="2025-10-14 07:08:58.94116554 +0000 UTC m=+1286.852249346" lastFinishedPulling="2025-10-14 07:09:03.255411922 +0000 UTC m=+1291.166495748" observedRunningTime="2025-10-14 07:09:04.018513246 
+0000 UTC m=+1291.929597062" watchObservedRunningTime="2025-10-14 07:09:04.024542487 +0000 UTC m=+1291.935626293" Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.033654 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.054782053 podStartE2EDuration="6.033636705s" podCreationTimestamp="2025-10-14 07:08:58 +0000 UTC" firstStartedPulling="2025-10-14 07:08:59.279229386 +0000 UTC m=+1287.190313212" lastFinishedPulling="2025-10-14 07:09:03.258084038 +0000 UTC m=+1291.169167864" observedRunningTime="2025-10-14 07:09:04.033018318 +0000 UTC m=+1291.944102144" watchObservedRunningTime="2025-10-14 07:09:04.033636705 +0000 UTC m=+1291.944720511" Oct 14 07:09:04 crc kubenswrapper[5058]: I1014 07:09:04.054181 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.928062453 podStartE2EDuration="7.054160479s" podCreationTimestamp="2025-10-14 07:08:57 +0000 UTC" firstStartedPulling="2025-10-14 07:08:59.122075141 +0000 UTC m=+1287.033158947" lastFinishedPulling="2025-10-14 07:09:03.248173157 +0000 UTC m=+1291.159256973" observedRunningTime="2025-10-14 07:09:04.051372359 +0000 UTC m=+1291.962456185" watchObservedRunningTime="2025-10-14 07:09:04.054160479 +0000 UTC m=+1291.965244285" Oct 14 07:09:05 crc kubenswrapper[5058]: I1014 07:09:05.020399 5058 generic.go:334] "Generic (PLEG): container finished" podID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerID="4d2ca79f27d42a301586f6dcab92f4bec5bd1dd7f66d712c73c46b4eceb65bf8" exitCode=143 Oct 14 07:09:05 crc kubenswrapper[5058]: I1014 07:09:05.022158 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerDied","Data":"4d2ca79f27d42a301586f6dcab92f4bec5bd1dd7f66d712c73c46b4eceb65bf8"} Oct 14 07:09:07 crc kubenswrapper[5058]: I1014 07:09:07.046527 5058 generic.go:334] "Generic (PLEG): container finished" podID="88700bea-57f8-4d34-a17f-5e5f85792862" containerID="dd8d9264e90400aef2e4a4308ff7a28d4739a18cf3ccd1fa4028410913e9f4cb" exitCode=0 Oct 14 07:09:07 crc kubenswrapper[5058]: I1014 07:09:07.046595 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4xllk" event={"ID":"88700bea-57f8-4d34-a17f-5e5f85792862","Type":"ContainerDied","Data":"dd8d9264e90400aef2e4a4308ff7a28d4739a18cf3ccd1fa4028410913e9f4cb"} Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.060554 5058 generic.go:334] "Generic (PLEG): container finished" podID="15377fc0-cfba-460c-b865-e8fb66fea612" containerID="a7097745ea8c09cfe21aa321a8f31a6de450e38a41d2d22287ccb498aca9574d" exitCode=0 Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.060611 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qk69x" event={"ID":"15377fc0-cfba-460c-b865-e8fb66fea612","Type":"ContainerDied","Data":"a7097745ea8c09cfe21aa321a8f31a6de450e38a41d2d22287ccb498aca9574d"} Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.309944 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.438305 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.567344 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data\") pod \"88700bea-57f8-4d34-a17f-5e5f85792862\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.567524 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle\") pod \"88700bea-57f8-4d34-a17f-5e5f85792862\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.567616 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts\") pod \"88700bea-57f8-4d34-a17f-5e5f85792862\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.567730 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9q9x\" (UniqueName: \"kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x\") pod \"88700bea-57f8-4d34-a17f-5e5f85792862\" (UID: \"88700bea-57f8-4d34-a17f-5e5f85792862\") " Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.574158 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x" (OuterVolumeSpecName: "kube-api-access-f9q9x") pod "88700bea-57f8-4d34-a17f-5e5f85792862" (UID: "88700bea-57f8-4d34-a17f-5e5f85792862"). InnerVolumeSpecName "kube-api-access-f9q9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.574656 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.574722 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.577318 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts" (OuterVolumeSpecName: "scripts") pod "88700bea-57f8-4d34-a17f-5e5f85792862" (UID: "88700bea-57f8-4d34-a17f-5e5f85792862"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.603097 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data" (OuterVolumeSpecName: "config-data") pod "88700bea-57f8-4d34-a17f-5e5f85792862" (UID: "88700bea-57f8-4d34-a17f-5e5f85792862"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.620020 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88700bea-57f8-4d34-a17f-5e5f85792862" (UID: "88700bea-57f8-4d34-a17f-5e5f85792862"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.645013 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.645086 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.660427 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.660828 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.670960 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.671164 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.672571 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88700bea-57f8-4d34-a17f-5e5f85792862-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.672736 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9q9x\" (UniqueName: \"kubernetes.io/projected/88700bea-57f8-4d34-a17f-5e5f85792862-kube-api-access-f9q9x\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.673132 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.706985 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.768047 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"] Oct 14 07:09:08 crc kubenswrapper[5058]: I1014 07:09:08.768304 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="dnsmasq-dns" containerID="cri-o://1545e5441d594a102205d61027ae292f75ddd746f5bd8cfb20d42ef0778bf558" gracePeriod=10 Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.075826 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4xllk" event={"ID":"88700bea-57f8-4d34-a17f-5e5f85792862","Type":"ContainerDied","Data":"7742748b3d5a13748f7f74727cf344a3cde0df58c9302986c10a57fee38716f0"} Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.076210 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7742748b3d5a13748f7f74727cf344a3cde0df58c9302986c10a57fee38716f0" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.076303 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4xllk" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.080652 5058 generic.go:334] "Generic (PLEG): container finished" podID="f5850000-d8e3-4fc4-ae68-180280760d79" containerID="1545e5441d594a102205d61027ae292f75ddd746f5bd8cfb20d42ef0778bf558" exitCode=0 Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.080824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" event={"ID":"f5850000-d8e3-4fc4-ae68-180280760d79","Type":"ContainerDied","Data":"1545e5441d594a102205d61027ae292f75ddd746f5bd8cfb20d42ef0778bf558"} Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.155431 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.262324 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.263036 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-log" containerID="cri-o://eac297a1ab7de0fc6708883d0bb399fc2467a293069482118a7213d02ee1169d" gracePeriod=30 Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.263582 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-api" containerID="cri-o://10ec47490e6421a71c56dc9c4018cd51961f35995bc57eb573f4aaacaf119eea" gracePeriod=30 Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.274318 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": EOF" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.274917 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.291821 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": EOF" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.399680 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.399767 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79xk2\" (UniqueName: \"kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.399894 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.399955 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.399988 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.400014 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb\") pod \"f5850000-d8e3-4fc4-ae68-180280760d79\" (UID: \"f5850000-d8e3-4fc4-ae68-180280760d79\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.409562 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2" (OuterVolumeSpecName: "kube-api-access-79xk2") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "kube-api-access-79xk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.453039 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config" (OuterVolumeSpecName: "config") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.454046 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.455962 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.456475 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.468562 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5850000-d8e3-4fc4-ae68-180280760d79" (UID: "f5850000-d8e3-4fc4-ae68-180280760d79"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502358 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79xk2\" (UniqueName: \"kubernetes.io/projected/f5850000-d8e3-4fc4-ae68-180280760d79-kube-api-access-79xk2\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502391 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502403 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502417 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502425 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.502434 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5850000-d8e3-4fc4-ae68-180280760d79-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.557467 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.677378 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.705439 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hfm\" (UniqueName: \"kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm\") pod \"15377fc0-cfba-460c-b865-e8fb66fea612\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.705931 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts\") pod \"15377fc0-cfba-460c-b865-e8fb66fea612\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.705977 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle\") pod \"15377fc0-cfba-460c-b865-e8fb66fea612\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.706027 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data\") pod \"15377fc0-cfba-460c-b865-e8fb66fea612\" (UID: \"15377fc0-cfba-460c-b865-e8fb66fea612\") " Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.709723 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm" (OuterVolumeSpecName: "kube-api-access-w4hfm") pod "15377fc0-cfba-460c-b865-e8fb66fea612" (UID: "15377fc0-cfba-460c-b865-e8fb66fea612"). InnerVolumeSpecName "kube-api-access-w4hfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.713010 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts" (OuterVolumeSpecName: "scripts") pod "15377fc0-cfba-460c-b865-e8fb66fea612" (UID: "15377fc0-cfba-460c-b865-e8fb66fea612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.743045 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15377fc0-cfba-460c-b865-e8fb66fea612" (UID: "15377fc0-cfba-460c-b865-e8fb66fea612"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.749663 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data" (OuterVolumeSpecName: "config-data") pod "15377fc0-cfba-460c-b865-e8fb66fea612" (UID: "15377fc0-cfba-460c-b865-e8fb66fea612"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.808297 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.808336 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.808350 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15377fc0-cfba-460c-b865-e8fb66fea612-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:09 crc kubenswrapper[5058]: I1014 07:09:09.808365 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hfm\" (UniqueName: \"kubernetes.io/projected/15377fc0-cfba-460c-b865-e8fb66fea612-kube-api-access-w4hfm\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.091061 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qk69x" event={"ID":"15377fc0-cfba-460c-b865-e8fb66fea612","Type":"ContainerDied","Data":"812f9503e23fe8b607667751ed0001e302f4a407dd5dadbe270c3bc034a4b825"} Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.091103 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812f9503e23fe8b607667751ed0001e302f4a407dd5dadbe270c3bc034a4b825" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.091167 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qk69x" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.096240 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" event={"ID":"f5850000-d8e3-4fc4-ae68-180280760d79","Type":"ContainerDied","Data":"81abeccb667e00722161383fbb358b33fcb1c7d32ac4bf967e5d25c87493922f"} Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.096307 5058 scope.go:117] "RemoveContainer" containerID="1545e5441d594a102205d61027ae292f75ddd746f5bd8cfb20d42ef0778bf558" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.096479 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb9f44c77-2q9fb" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.102414 5058 generic.go:334] "Generic (PLEG): container finished" podID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerID="eac297a1ab7de0fc6708883d0bb399fc2467a293069482118a7213d02ee1169d" exitCode=143 Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.102472 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerDied","Data":"eac297a1ab7de0fc6708883d0bb399fc2467a293069482118a7213d02ee1169d"} Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165322 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:09:10 crc kubenswrapper[5058]: E1014 07:09:10.165656 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88700bea-57f8-4d34-a17f-5e5f85792862" containerName="nova-manage" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165666 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="88700bea-57f8-4d34-a17f-5e5f85792862" containerName="nova-manage" Oct 14 07:09:10 crc kubenswrapper[5058]: E1014 07:09:10.165678 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="init" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165684 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="init" Oct 14 07:09:10 crc kubenswrapper[5058]: E1014 07:09:10.165698 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="dnsmasq-dns" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165706 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="dnsmasq-dns" Oct 14 07:09:10 crc kubenswrapper[5058]: E1014 07:09:10.165715 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15377fc0-cfba-460c-b865-e8fb66fea612" containerName="nova-cell1-conductor-db-sync" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165720 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="15377fc0-cfba-460c-b865-e8fb66fea612" containerName="nova-cell1-conductor-db-sync" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165895 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="15377fc0-cfba-460c-b865-e8fb66fea612" containerName="nova-cell1-conductor-db-sync" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165920 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" containerName="dnsmasq-dns" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.165932 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="88700bea-57f8-4d34-a17f-5e5f85792862" containerName="nova-manage" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.166441 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.166502 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.184591 5058 scope.go:117] "RemoveContainer" containerID="eb5db8b46c6650d243b0539e445eda0e01178b6c8fe4208fb52d0dd48872f6cf" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.184890 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.217292 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"] Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.225209 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb9f44c77-2q9fb"] Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.318222 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc9g\" (UniqueName: \"kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.318341 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.318394 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.420079 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnc9g\" (UniqueName: \"kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.420464 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.420598 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.424279 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.434540 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.438426 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnc9g\" (UniqueName: \"kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g\") pod \"nova-cell1-conductor-0\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.522640 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.808203 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5850000-d8e3-4fc4-ae68-180280760d79" path="/var/lib/kubelet/pods/f5850000-d8e3-4fc4-ae68-180280760d79/volumes" Oct 14 07:09:10 crc kubenswrapper[5058]: W1014 07:09:10.979860 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7b2987_5495_4f39_a2d0_0c2461b31e1e.slice/crio-b225081c585653bb06f321832b00b75ebbcb889126ad1aa580ec341f65c9eeae WatchSource:0}: Error finding container b225081c585653bb06f321832b00b75ebbcb889126ad1aa580ec341f65c9eeae: Status 404 returned error can't find the container with id b225081c585653bb06f321832b00b75ebbcb889126ad1aa580ec341f65c9eeae Oct 14 07:09:10 crc kubenswrapper[5058]: I1014 07:09:10.984310 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:09:11 crc kubenswrapper[5058]: I1014 07:09:11.120317 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c7b2987-5495-4f39-a2d0-0c2461b31e1e","Type":"ContainerStarted","Data":"b225081c585653bb06f321832b00b75ebbcb889126ad1aa580ec341f65c9eeae"} Oct 14 07:09:11 crc kubenswrapper[5058]: I1014 07:09:11.120376 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d5b79516-f3d1-4394-8217-f450e54a5379" containerName="nova-scheduler-scheduler" containerID="cri-o://d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" gracePeriod=30 Oct 14 07:09:12 crc kubenswrapper[5058]: I1014 07:09:12.133976 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c7b2987-5495-4f39-a2d0-0c2461b31e1e","Type":"ContainerStarted","Data":"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a"} Oct 14 07:09:12 crc kubenswrapper[5058]: I1014 07:09:12.134900 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:12 crc kubenswrapper[5058]: I1014 07:09:12.166368 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.166339802 podStartE2EDuration="2.166339802s" podCreationTimestamp="2025-10-14 07:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:12.157442179 +0000 UTC m=+1300.068526055" watchObservedRunningTime="2025-10-14 07:09:12.166339802 +0000 UTC m=+1300.077423648" Oct 14 07:09:13 crc kubenswrapper[5058]: E1014 07:09:13.661361 5058 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:13 crc kubenswrapper[5058]: E1014 07:09:13.664019 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:13 crc kubenswrapper[5058]: E1014 07:09:13.665822 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:13 crc kubenswrapper[5058]: E1014 07:09:13.665922 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d5b79516-f3d1-4394-8217-f450e54a5379" containerName="nova-scheduler-scheduler" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.173465 5058 generic.go:334] "Generic (PLEG): container finished" podID="d5b79516-f3d1-4394-8217-f450e54a5379" containerID="d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" exitCode=0 Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.173547 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b79516-f3d1-4394-8217-f450e54a5379","Type":"ContainerDied","Data":"d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0"} Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.176414 5058 generic.go:334] "Generic (PLEG): container finished" podID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerID="10ec47490e6421a71c56dc9c4018cd51961f35995bc57eb573f4aaacaf119eea" exitCode=0 Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.176460 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerDied","Data":"10ec47490e6421a71c56dc9c4018cd51961f35995bc57eb573f4aaacaf119eea"} Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.176489 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d4fccfe-1d25-4088-a2b5-b49217c514c7","Type":"ContainerDied","Data":"c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7"} Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.176503 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c051fac4e526af65d91f97408777c2100e8f327e26914696c1b48077ec2bf7" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.195574 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.320725 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4fhg\" (UniqueName: \"kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg\") pod \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.320881 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle\") pod \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.321047 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs\") pod \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.321089 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data\") pod \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\" (UID: \"3d4fccfe-1d25-4088-a2b5-b49217c514c7\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.321959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs" (OuterVolumeSpecName: "logs") pod "3d4fccfe-1d25-4088-a2b5-b49217c514c7" (UID: "3d4fccfe-1d25-4088-a2b5-b49217c514c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.326832 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg" (OuterVolumeSpecName: "kube-api-access-s4fhg") pod "3d4fccfe-1d25-4088-a2b5-b49217c514c7" (UID: "3d4fccfe-1d25-4088-a2b5-b49217c514c7"). InnerVolumeSpecName "kube-api-access-s4fhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.349771 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d4fccfe-1d25-4088-a2b5-b49217c514c7" (UID: "3d4fccfe-1d25-4088-a2b5-b49217c514c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.358293 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data" (OuterVolumeSpecName: "config-data") pod "3d4fccfe-1d25-4088-a2b5-b49217c514c7" (UID: "3d4fccfe-1d25-4088-a2b5-b49217c514c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.393938 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.423516 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d4fccfe-1d25-4088-a2b5-b49217c514c7-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.423931 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.424100 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4fhg\" (UniqueName: \"kubernetes.io/projected/3d4fccfe-1d25-4088-a2b5-b49217c514c7-kube-api-access-s4fhg\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.424245 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4fccfe-1d25-4088-a2b5-b49217c514c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.525446 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data\") pod \"d5b79516-f3d1-4394-8217-f450e54a5379\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.525630 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjwc\" (UniqueName: \"kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc\") pod \"d5b79516-f3d1-4394-8217-f450e54a5379\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.525649 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle\") pod \"d5b79516-f3d1-4394-8217-f450e54a5379\" (UID: \"d5b79516-f3d1-4394-8217-f450e54a5379\") " Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.529328 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc" (OuterVolumeSpecName: "kube-api-access-mmjwc") pod "d5b79516-f3d1-4394-8217-f450e54a5379" (UID: "d5b79516-f3d1-4394-8217-f450e54a5379"). InnerVolumeSpecName "kube-api-access-mmjwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.555837 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5b79516-f3d1-4394-8217-f450e54a5379" (UID: "d5b79516-f3d1-4394-8217-f450e54a5379"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.562480 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data" (OuterVolumeSpecName: "config-data") pod "d5b79516-f3d1-4394-8217-f450e54a5379" (UID: "d5b79516-f3d1-4394-8217-f450e54a5379"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.627615 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjwc\" (UniqueName: \"kubernetes.io/projected/d5b79516-f3d1-4394-8217-f450e54a5379-kube-api-access-mmjwc\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.627653 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:15 crc kubenswrapper[5058]: I1014 07:09:15.627668 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b79516-f3d1-4394-8217-f450e54a5379-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.191992 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.191994 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.192016 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b79516-f3d1-4394-8217-f450e54a5379","Type":"ContainerDied","Data":"d191020a585c7bef2d51d0ddd309f6779819e7dac1b9289302123197a47cc08e"} Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.194753 5058 scope.go:117] "RemoveContainer" containerID="d7d4b6e292dc0c9acf26ace5013e64befa5cc3cee8ee0d1afe41ec98b86e22c0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.294202 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.320078 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.324788 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.335625 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.344112 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: E1014 07:09:16.345292 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-log" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345356 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-log" Oct 14 07:09:16 crc kubenswrapper[5058]: E1014 07:09:16.345397 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-api" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345409 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-api" Oct 14 07:09:16 crc kubenswrapper[5058]: E1014 07:09:16.345446 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b79516-f3d1-4394-8217-f450e54a5379" containerName="nova-scheduler-scheduler" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345458 5058 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5b79516-f3d1-4394-8217-f450e54a5379" containerName="nova-scheduler-scheduler" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345776 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-log" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345847 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b79516-f3d1-4394-8217-f450e54a5379" containerName="nova-scheduler-scheduler" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.345875 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" containerName="nova-api-api" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.347218 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.349750 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.351852 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.353589 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.355166 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.364084 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.374959 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.454914 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6qp\" (UniqueName: \"kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.454998 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s86w\" (UniqueName: \"kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.455044 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.455085 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.455144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.455264 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.455332 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.556856 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.556946 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557014 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557051 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557071 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557113 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6qp\" (UniqueName: \"kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557137 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s86w\" (UniqueName: \"kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.557690 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.561474 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.563626 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.565169 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.566255 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.573310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s86w\" (UniqueName: \"kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w\") pod \"nova-api-0\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.581785 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6qp\" (UniqueName: \"kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp\") pod \"nova-scheduler-0\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.674904 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.680659 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.830678 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4fccfe-1d25-4088-a2b5-b49217c514c7" path="/var/lib/kubelet/pods/3d4fccfe-1d25-4088-a2b5-b49217c514c7/volumes" Oct 14 07:09:16 crc kubenswrapper[5058]: I1014 07:09:16.832694 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b79516-f3d1-4394-8217-f450e54a5379" path="/var/lib/kubelet/pods/d5b79516-f3d1-4394-8217-f450e54a5379/volumes" Oct 14 07:09:17 crc kubenswrapper[5058]: I1014 07:09:17.207491 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:17 crc kubenswrapper[5058]: I1014 07:09:17.235182 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:17 crc kubenswrapper[5058]: W1014 07:09:17.241677 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59099423_0d1d_46bb_97f7_22fb9099467b.slice/crio-1716a3964d60f1231e8c5314784b187dd4b348ad775e762a0d0dd9bc75d81b6e WatchSource:0}: Error finding container 1716a3964d60f1231e8c5314784b187dd4b348ad775e762a0d0dd9bc75d81b6e: Status 404 returned error can't find the container with id 1716a3964d60f1231e8c5314784b187dd4b348ad775e762a0d0dd9bc75d81b6e Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.066516 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.230477 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerStarted","Data":"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec"} Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.230559 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerStarted","Data":"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5"} Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.230574 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerStarted","Data":"6a8a99f1036287097dad26f12314dd1bb43d5b61b8ba7d1aec02a8dac3177584"} Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.233619 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59099423-0d1d-46bb-97f7-22fb9099467b","Type":"ContainerStarted","Data":"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746"} Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.233867 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59099423-0d1d-46bb-97f7-22fb9099467b","Type":"ContainerStarted","Data":"1716a3964d60f1231e8c5314784b187dd4b348ad775e762a0d0dd9bc75d81b6e"} Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.259652 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.259632967 podStartE2EDuration="2.259632967s" podCreationTimestamp="2025-10-14 07:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:18.247492822 +0000 UTC m=+1306.158576638" 
watchObservedRunningTime="2025-10-14 07:09:18.259632967 +0000 UTC m=+1306.170716773" Oct 14 07:09:18 crc kubenswrapper[5058]: I1014 07:09:18.267340 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.267324616 podStartE2EDuration="2.267324616s" podCreationTimestamp="2025-10-14 07:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:18.262928331 +0000 UTC m=+1306.174012147" watchObservedRunningTime="2025-10-14 07:09:18.267324616 +0000 UTC m=+1306.178408422" Oct 14 07:09:20 crc kubenswrapper[5058]: I1014 07:09:20.579297 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 14 07:09:21 crc kubenswrapper[5058]: I1014 07:09:21.675105 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 07:09:22 crc kubenswrapper[5058]: I1014 07:09:22.236910 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:22 crc kubenswrapper[5058]: I1014 07:09:22.237510 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" containerName="kube-state-metrics" containerID="cri-o://d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f" gracePeriod=30 Oct 14 07:09:22 crc kubenswrapper[5058]: I1014 07:09:22.798032 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:09:22 crc kubenswrapper[5058]: I1014 07:09:22.895733 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthkl\" (UniqueName: \"kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl\") pod \"527d8486-b4e1-4ee0-8618-c1bcfb73139b\" (UID: \"527d8486-b4e1-4ee0-8618-c1bcfb73139b\") " Oct 14 07:09:22 crc kubenswrapper[5058]: I1014 07:09:22.901983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl" (OuterVolumeSpecName: "kube-api-access-qthkl") pod "527d8486-b4e1-4ee0-8618-c1bcfb73139b" (UID: "527d8486-b4e1-4ee0-8618-c1bcfb73139b"). InnerVolumeSpecName "kube-api-access-qthkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:22.999366 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthkl\" (UniqueName: \"kubernetes.io/projected/527d8486-b4e1-4ee0-8618-c1bcfb73139b-kube-api-access-qthkl\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.287826 5058 generic.go:334] "Generic (PLEG): container finished" podID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" containerID="d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f" exitCode=2 Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.287882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"527d8486-b4e1-4ee0-8618-c1bcfb73139b","Type":"ContainerDied","Data":"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f"} Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.287920 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"527d8486-b4e1-4ee0-8618-c1bcfb73139b","Type":"ContainerDied","Data":"f753805f248e4265b5bfca9660824a6e7b368fb84e1f490753afaa3f99e0d1f0"} Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.287925 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.287941 5058 scope.go:117] "RemoveContainer" containerID="d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.317451 5058 scope.go:117] "RemoveContainer" containerID="d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f" Oct 14 07:09:23 crc kubenswrapper[5058]: E1014 07:09:23.318044 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f\": container with ID starting with d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f not found: ID does not exist" containerID="d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.318106 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f"} err="failed to get container status \"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f\": rpc error: code = NotFound desc = could not find container \"d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f\": container with ID starting with d923886eb2621924442093ac92cb8671e97c7957906306f18aa3143355df009f not found: ID does not exist" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.343425 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.360962 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.369602 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:23 crc kubenswrapper[5058]: E1014 07:09:23.369972 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" containerName="kube-state-metrics" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.369989 5058 
state_mem.go:107] "Deleted CPUSet assignment" podUID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" containerName="kube-state-metrics" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.370179 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" containerName="kube-state-metrics" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.370753 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.373228 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.373466 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.383563 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.511108 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.511215 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.511252 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.511322 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxdjk\" (UniqueName: \"kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.613004 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.613413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.613484 5058 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mxdjk\" (UniqueName: \"kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.613535 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.620387 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.620510 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.622212 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.631075 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxdjk\" (UniqueName: \"kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk\") pod \"kube-state-metrics-0\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " pod="openstack/kube-state-metrics-0" Oct 14 07:09:23 crc kubenswrapper[5058]: I1014 07:09:23.698698 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.060679 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.064636 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-central-agent" containerID="cri-o://84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28" gracePeriod=30 Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.065477 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="sg-core" containerID="cri-o://7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3" gracePeriod=30 Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.065467 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-notification-agent" containerID="cri-o://ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf" gracePeriod=30 Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.065712 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="proxy-httpd" containerID="cri-o://c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801" gracePeriod=30 Oct 14 07:09:24 crc kubenswrapper[5058]: W1014 07:09:24.205983 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67eb8142_3269_43d9_a2bc_a6afbaa991f9.slice/crio-8686a2eb0a55583dd3a52cfa20ae3a9eee2604f09db5d25817137d5a43be23dd WatchSource:0}: Error finding container 8686a2eb0a55583dd3a52cfa20ae3a9eee2604f09db5d25817137d5a43be23dd: Status 404 returned error can't find the container with id 8686a2eb0a55583dd3a52cfa20ae3a9eee2604f09db5d25817137d5a43be23dd Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.209056 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.300465 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67eb8142-3269-43d9-a2bc-a6afbaa991f9","Type":"ContainerStarted","Data":"8686a2eb0a55583dd3a52cfa20ae3a9eee2604f09db5d25817137d5a43be23dd"} Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.302224 5058 generic.go:334] "Generic (PLEG): container finished" podID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerID="c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801" exitCode=0 Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.302286 5058 generic.go:334] "Generic (PLEG): container finished" podID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerID="7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3" exitCode=2 Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.302303 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerDied","Data":"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801"} Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.302318 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerDied","Data":"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3"} Oct 14 07:09:24 crc kubenswrapper[5058]: I1014 07:09:24.801294 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527d8486-b4e1-4ee0-8618-c1bcfb73139b" path="/var/lib/kubelet/pods/527d8486-b4e1-4ee0-8618-c1bcfb73139b/volumes" Oct 14 07:09:25 crc kubenswrapper[5058]: I1014 07:09:25.325531 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67eb8142-3269-43d9-a2bc-a6afbaa991f9","Type":"ContainerStarted","Data":"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230"} Oct 14 07:09:25 crc kubenswrapper[5058]: I1014 07:09:25.326112 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 07:09:25 crc kubenswrapper[5058]: I1014 07:09:25.328634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerDied","Data":"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28"} Oct 14 07:09:25 crc kubenswrapper[5058]: I1014 07:09:25.328539 5058 generic.go:334] "Generic (PLEG): container finished" podID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerID="84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28" exitCode=0 Oct 14 07:09:25 crc kubenswrapper[5058]: I1014 07:09:25.354468 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.728393492 podStartE2EDuration="2.354446842s" podCreationTimestamp="2025-10-14 07:09:23 +0000 UTC" firstStartedPulling="2025-10-14 07:09:24.208611043 +0000 UTC m=+1312.119694849" lastFinishedPulling="2025-10-14 07:09:24.834664403 +0000 UTC m=+1312.745748199" observedRunningTime="2025-10-14 07:09:25.350641403 +0000 UTC m=+1313.261725239" watchObservedRunningTime="2025-10-14 07:09:25.354446842 +0000 UTC m=+1313.265530648" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.106000 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.164932 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165012 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165058 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165174 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klph2\" (UniqueName: \"kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165207 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165333 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.165507 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd\") pod \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\" (UID: \"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e\") " Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.167034 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.167259 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.172672 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts" (OuterVolumeSpecName: "scripts") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.172822 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2" (OuterVolumeSpecName: "kube-api-access-klph2") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "kube-api-access-klph2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.218298 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.254943 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267487 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267519 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267534 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267546 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klph2\" (UniqueName: \"kubernetes.io/projected/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-kube-api-access-klph2\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267558 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.267567 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.279480 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data" (OuterVolumeSpecName: "config-data") pod "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" (UID: "6cdac73f-f4a2-4ce0-a8c0-75912eeb203e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.342889 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.342973 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerDied","Data":"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf"} Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.342876 5058 generic.go:334] "Generic (PLEG): container finished" podID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerID="ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf" exitCode=0 Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.343266 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cdac73f-f4a2-4ce0-a8c0-75912eeb203e","Type":"ContainerDied","Data":"50cd86af4f288a1c38852a7808da708ad5102a6a5c64f02d675ca46d1d0356dd"} Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.344269 5058 scope.go:117] "RemoveContainer" containerID="c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.372732 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.401611 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.410252 5058 scope.go:117] "RemoveContainer" containerID="7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.464121 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.477639 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.478114 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="proxy-httpd" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478141 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="proxy-httpd" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.478166 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-central-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478174 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-central-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.478201 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-notification-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478210 5058 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-notification-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.478223 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="sg-core" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478230 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="sg-core" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478505 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-central-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478520 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="proxy-httpd" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478529 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="sg-core" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.478547 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" containerName="ceilometer-notification-agent" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.486784 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.489477 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.489690 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.491058 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.492926 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.499401 5058 scope.go:117] "RemoveContainer" containerID="ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.529555 5058 scope.go:117] "RemoveContainer" containerID="84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.549109 5058 scope.go:117] "RemoveContainer" containerID="c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.549540 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801\": container with ID starting with c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801 not found: ID does not exist" containerID="c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.549571 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801"} err="failed to get container status \"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801\": rpc error: code = NotFound desc = could not find container 
\"c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801\": container with ID starting with c5a06ec0da9dfcdfbe4813b4eba180c4710d09a4046c35977143013a3906b801 not found: ID does not exist" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.549591 5058 scope.go:117] "RemoveContainer" containerID="7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.549983 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3\": container with ID starting with 7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3 not found: ID does not exist" containerID="7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.550049 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3"} err="failed to get container status \"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3\": rpc error: code = NotFound desc = could not find container \"7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3\": container with ID starting with 7788301355a7bf5bab933598471f5a6b85851f41be655f8ec5e0de81be453af3 not found: ID does not exist" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.550100 5058 scope.go:117] "RemoveContainer" containerID="ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.550431 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf\": container with ID starting with ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf not found: ID does not exist" containerID="ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.550491 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf"} err="failed to get container status \"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf\": rpc error: code = NotFound desc = could not find container \"ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf\": container with ID starting with ffd5f10711995e807ed99e3be35b7bee3cde671d9adbcfe64d7b4165027d05cf not found: ID does not exist" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.550532 5058 scope.go:117] "RemoveContainer" containerID="84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28" Oct 14 07:09:26 crc kubenswrapper[5058]: E1014 07:09:26.551140 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28\": container with ID starting with 84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28 not found: ID does not exist" containerID="84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.551197 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28"} 
err="failed to get container status \"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28\": rpc error: code = NotFound desc = could not find container \"84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28\": container with ID starting with 84b386a3ab03fe2a3c598058eddacb0a6e4b03aef82cc7786aad9330d6043a28 not found: ID does not exist" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.579912 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580000 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580038 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjrcm\" (UniqueName: \"kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580098 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580173 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580491 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.580558 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.676010 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 07:09:26 crc 
kubenswrapper[5058]: I1014 07:09:26.681391 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.681450 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682146 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682181 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682210 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682247 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682267 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjrcm\" (UniqueName: \"kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682307 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682372 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.682892 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.684323 
5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.685616 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.688733 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.691396 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.691583 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.692353 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.701730 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjrcm\" (UniqueName: \"kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm\") pod \"ceilometer-0\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " pod="openstack/ceilometer-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.710639 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.803534 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdac73f-f4a2-4ce0-a8c0-75912eeb203e" path="/var/lib/kubelet/pods/6cdac73f-f4a2-4ce0-a8c0-75912eeb203e/volumes" Oct 14 07:09:26 crc kubenswrapper[5058]: I1014 07:09:26.809210 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:27 crc kubenswrapper[5058]: I1014 07:09:27.298065 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:27 crc kubenswrapper[5058]: W1014 07:09:27.311947 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1194eea_529b_4873_8665_c22c2fbb2b27.slice/crio-d59aae227df0e81f1ee8161e280ed455db1bd487a88305670a7a22669a178ae1 WatchSource:0}: Error finding container d59aae227df0e81f1ee8161e280ed455db1bd487a88305670a7a22669a178ae1: Status 404 returned error can't find the container with id d59aae227df0e81f1ee8161e280ed455db1bd487a88305670a7a22669a178ae1 Oct 14 07:09:27 crc kubenswrapper[5058]: I1014 07:09:27.357360 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerStarted","Data":"d59aae227df0e81f1ee8161e280ed455db1bd487a88305670a7a22669a178ae1"} Oct 14 07:09:27 crc kubenswrapper[5058]: I1014 07:09:27.393999 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 07:09:27 crc kubenswrapper[5058]: I1014 07:09:27.724134 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 07:09:27 crc kubenswrapper[5058]: I1014 07:09:27.724142 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 07:09:28 crc kubenswrapper[5058]: I1014 07:09:28.370317 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerStarted","Data":"9f238240eb24286b76efa80a1726feea15eb1b5428f453740332097da1e38c23"} Oct 14 07:09:29 crc kubenswrapper[5058]: I1014 07:09:29.381827 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerStarted","Data":"b035b6b223e50e87be90760e1acbd080ac48c2bf873b88457542259d3a2df549"} Oct 14 07:09:30 crc kubenswrapper[5058]: I1014 07:09:30.394272 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerStarted","Data":"bbeb69e4e385c48265b40df8c15edd1fba1aacd841f9efd3e30e55d77094cf8a"} Oct 14 07:09:31 crc kubenswrapper[5058]: I1014 07:09:31.413108 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerStarted","Data":"18cf0eabc528f19571aa3342a5cfd3b332ebbf68b74fbdcb901543fbbdd1fbcc"} Oct 14 07:09:31 crc kubenswrapper[5058]: I1014 07:09:31.413616 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:09:31 crc kubenswrapper[5058]: I1014 07:09:31.446710 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.124368141 podStartE2EDuration="5.446686957s" 
podCreationTimestamp="2025-10-14 07:09:26 +0000 UTC" firstStartedPulling="2025-10-14 07:09:27.315566509 +0000 UTC m=+1315.226650315" lastFinishedPulling="2025-10-14 07:09:30.637885325 +0000 UTC m=+1318.548969131" observedRunningTime="2025-10-14 07:09:31.434293405 +0000 UTC m=+1319.345377241" watchObservedRunningTime="2025-10-14 07:09:31.446686957 +0000 UTC m=+1319.357770773" Oct 14 07:09:33 crc kubenswrapper[5058]: I1014 07:09:33.657108 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:09:33 crc kubenswrapper[5058]: I1014 07:09:33.657705 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:09:33 crc kubenswrapper[5058]: I1014 07:09:33.712887 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 07:09:34 crc kubenswrapper[5058]: E1014 07:09:34.335967 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1389f0_13b1_4c90_8fa9_c61a088bb4d6.slice/crio-conmon-f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4.scope\": RecentStats: unable to find data in memory cache]" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.472733 5058 generic.go:334] "Generic (PLEG): container finished" podID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" containerID="70e91a24551b0735af960c2cc36eeff8a125d876bf6da56d8d6d8fff049d0832" exitCode=137 Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.472876 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5be17ea0-7e76-455f-8ab5-4dc85bb834c8","Type":"ContainerDied","Data":"70e91a24551b0735af960c2cc36eeff8a125d876bf6da56d8d6d8fff049d0832"} Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.472919 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5be17ea0-7e76-455f-8ab5-4dc85bb834c8","Type":"ContainerDied","Data":"153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e"} Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.472938 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153668b849eecad64eb1379ed9ec27ff928e7f3a24c378d2175921df9266a11e" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.477478 5058 generic.go:334] "Generic (PLEG): container finished" podID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerID="f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4" exitCode=137 Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.477546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerDied","Data":"f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4"} Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.477593 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6","Type":"ContainerDied","Data":"9228a7143dd5a5f2cc023d2f3c77f34ed4106ac9d87629739c50d9a5736841e6"} Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.477618 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9228a7143dd5a5f2cc023d2f3c77f34ed4106ac9d87629739c50d9a5736841e6" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.528721 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.539147 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.654584 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt9z4\" (UniqueName: \"kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4\") pod \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.655774 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data\") pod \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.656095 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data\") pod \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.656208 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle\") pod \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\" (UID: \"5be17ea0-7e76-455f-8ab5-4dc85bb834c8\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.656249 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs\") pod \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.656286 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle\") pod \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.656324 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d52bd\" (UniqueName: \"kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd\") pod \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\" (UID: \"6a1389f0-13b1-4c90-8fa9-c61a088bb4d6\") " Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.659790 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs" (OuterVolumeSpecName: "logs") pod "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" (UID: "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.661973 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4" (OuterVolumeSpecName: "kube-api-access-mt9z4") pod "5be17ea0-7e76-455f-8ab5-4dc85bb834c8" (UID: "5be17ea0-7e76-455f-8ab5-4dc85bb834c8"). InnerVolumeSpecName "kube-api-access-mt9z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.662393 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd" (OuterVolumeSpecName: "kube-api-access-d52bd") pod "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" (UID: "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6"). InnerVolumeSpecName "kube-api-access-d52bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.684908 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5be17ea0-7e76-455f-8ab5-4dc85bb834c8" (UID: "5be17ea0-7e76-455f-8ab5-4dc85bb834c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.691077 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data" (OuterVolumeSpecName: "config-data") pod "5be17ea0-7e76-455f-8ab5-4dc85bb834c8" (UID: "5be17ea0-7e76-455f-8ab5-4dc85bb834c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.693255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" (UID: "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.703289 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data" (OuterVolumeSpecName: "config-data") pod "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" (UID: "6a1389f0-13b1-4c90-8fa9-c61a088bb4d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758593 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758625 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758634 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758643 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d52bd\" (UniqueName: \"kubernetes.io/projected/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-kube-api-access-d52bd\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758652 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt9z4\" (UniqueName: \"kubernetes.io/projected/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-kube-api-access-mt9z4\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758661 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5be17ea0-7e76-455f-8ab5-4dc85bb834c8-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:34 crc kubenswrapper[5058]: I1014 07:09:34.758689 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.492852 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.492874 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.547341 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.562227 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.576897 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.592939 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.598721 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: E1014 07:09:35.599212 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-log" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599227 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-log" Oct 14 07:09:35 crc kubenswrapper[5058]: E1014 07:09:35.599244 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-metadata" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599252 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-metadata" Oct 14 07:09:35 crc kubenswrapper[5058]: E1014 07:09:35.599291 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599299 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599536 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-metadata" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599550 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" containerName="nova-metadata-log" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.599563 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.600395 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.606365 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.606455 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.606597 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.606921 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.608624 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.614151 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.614255 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.617453 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.624746 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674539 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674588 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674620 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674640 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674670 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 
07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674743 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674759 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.674951 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.675030 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28b6\" (UniqueName: \"kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.675091 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhqk\" (UniqueName: \"kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.776712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.777875 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.777936 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.777979 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.778038 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.778137 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.778168 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.779527 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.780081 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.780156 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28b6\" (UniqueName: \"kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.780207 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nhqk\" (UniqueName: \"kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.782941 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.784517 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.784532 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.784837 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.785367 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.787489 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.787760 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.808427 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28b6\" (UniqueName: \"kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6\") pod \"nova-metadata-0\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " pod="openstack/nova-metadata-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.812193 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nhqk\" (UniqueName: \"kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.934067 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:35 crc kubenswrapper[5058]: I1014 07:09:35.953893 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:09:36 crc kubenswrapper[5058]: W1014 07:09:36.499307 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6771f490_1ad7_4f61_98a0_d52df29ef7c8.slice/crio-620390d1f5623b444165eee47ddd7cc564d852ac532074bc916c9f920e6b64af WatchSource:0}: Error finding container 620390d1f5623b444165eee47ddd7cc564d852ac532074bc916c9f920e6b64af: Status 404 returned error can't find the container with id 620390d1f5623b444165eee47ddd7cc564d852ac532074bc916c9f920e6b64af Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.507471 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.573508 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.688163 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.688853 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.689151 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.693013 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.815144 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be17ea0-7e76-455f-8ab5-4dc85bb834c8" path="/var/lib/kubelet/pods/5be17ea0-7e76-455f-8ab5-4dc85bb834c8/volumes" Oct 14 07:09:36 crc kubenswrapper[5058]: I1014 07:09:36.816910 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1389f0-13b1-4c90-8fa9-c61a088bb4d6" path="/var/lib/kubelet/pods/6a1389f0-13b1-4c90-8fa9-c61a088bb4d6/volumes" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.516882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerStarted","Data":"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"} Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.517157 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerStarted","Data":"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd"} Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.517171 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerStarted","Data":"2d340a8bd07ee18a25eaa3d957a509a56f0e25242f08f7b6e33cf626c347e99e"} Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.519004 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6771f490-1ad7-4f61-98a0-d52df29ef7c8","Type":"ContainerStarted","Data":"524cb89b86eddc16eadfdb3dea364ca4de5b74f0c46bcb1045cef2074f40a15e"} Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.519347 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.519393 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6771f490-1ad7-4f61-98a0-d52df29ef7c8","Type":"ContainerStarted","Data":"620390d1f5623b444165eee47ddd7cc564d852ac532074bc916c9f920e6b64af"} Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.524205 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.555065 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.55503723 podStartE2EDuration="2.55503723s" podCreationTimestamp="2025-10-14 07:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:37.545335564 +0000 UTC m=+1325.456419400" watchObservedRunningTime="2025-10-14 07:09:37.55503723 +0000 UTC m=+1325.466121066" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.575398 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.575365198 podStartE2EDuration="2.575365198s" podCreationTimestamp="2025-10-14 07:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:37.573646649 +0000 UTC m=+1325.484730455" watchObservedRunningTime="2025-10-14 07:09:37.575365198 +0000 UTC m=+1325.486448994" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.739996 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.747412 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.757063 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.841777 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.841830 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.841955 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.842180 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67h8x\" (UniqueName: \"kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: 
\"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.842234 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.842330 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944051 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67h8x\" (UniqueName: \"kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944123 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944194 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944313 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.944373 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.945279 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " 
pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.945344 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.945496 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.945760 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.945938 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:37 crc kubenswrapper[5058]: I1014 07:09:37.966031 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67h8x\" (UniqueName: \"kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x\") pod \"dnsmasq-dns-65bf758599-p6h9m\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:38 crc kubenswrapper[5058]: I1014 07:09:38.078120 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:38 crc kubenswrapper[5058]: I1014 07:09:38.588467 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:09:38 crc kubenswrapper[5058]: W1014 07:09:38.598758 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc560dc37_2c84_4468_800c_d90d8f8c158e.slice/crio-1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5 WatchSource:0}: Error finding container 1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5: Status 404 returned error can't find the container with id 1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5 Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.537009 5058 generic.go:334] "Generic (PLEG): container finished" podID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerID="b5ced10e8e9dc43761eac804b7a8665e7ec213ce5e5cabd7f244dd3594e1902c" exitCode=0 Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.537117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" event={"ID":"c560dc37-2c84-4468-800c-d90d8f8c158e","Type":"ContainerDied","Data":"b5ced10e8e9dc43761eac804b7a8665e7ec213ce5e5cabd7f244dd3594e1902c"} Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.537408 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" event={"ID":"c560dc37-2c84-4468-800c-d90d8f8c158e","Type":"ContainerStarted","Data":"1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5"} Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.631616 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.632014 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-central-agent" containerID="cri-o://9f238240eb24286b76efa80a1726feea15eb1b5428f453740332097da1e38c23" gracePeriod=30 Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.632529 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="proxy-httpd" containerID="cri-o://18cf0eabc528f19571aa3342a5cfd3b332ebbf68b74fbdcb901543fbbdd1fbcc" gracePeriod=30 Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.632660 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="sg-core" containerID="cri-o://bbeb69e4e385c48265b40df8c15edd1fba1aacd841f9efd3e30e55d77094cf8a" gracePeriod=30 Oct 14 07:09:39 crc kubenswrapper[5058]: I1014 07:09:39.632757 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-notification-agent" containerID="cri-o://b035b6b223e50e87be90760e1acbd080ac48c2bf873b88457542259d3a2df549" gracePeriod=30 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.272089 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.550576 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" 
event={"ID":"c560dc37-2c84-4468-800c-d90d8f8c158e","Type":"ContainerStarted","Data":"2182c2e8effad0274964ae00b87d07ae2b2c94468e266f8b6506ca45b077abb0"} Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.550835 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559082 5058 generic.go:334] "Generic (PLEG): container finished" podID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerID="18cf0eabc528f19571aa3342a5cfd3b332ebbf68b74fbdcb901543fbbdd1fbcc" exitCode=0 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559109 5058 generic.go:334] "Generic (PLEG): container finished" podID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerID="bbeb69e4e385c48265b40df8c15edd1fba1aacd841f9efd3e30e55d77094cf8a" exitCode=2 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559120 5058 generic.go:334] "Generic (PLEG): container finished" podID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerID="b035b6b223e50e87be90760e1acbd080ac48c2bf873b88457542259d3a2df549" exitCode=0 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559130 5058 generic.go:334] "Generic (PLEG): container finished" podID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerID="9f238240eb24286b76efa80a1726feea15eb1b5428f453740332097da1e38c23" exitCode=0 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559306 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-log" containerID="cri-o://1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5" gracePeriod=30 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559581 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerDied","Data":"18cf0eabc528f19571aa3342a5cfd3b332ebbf68b74fbdcb901543fbbdd1fbcc"} Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559610 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerDied","Data":"bbeb69e4e385c48265b40df8c15edd1fba1aacd841f9efd3e30e55d77094cf8a"} Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559622 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerDied","Data":"b035b6b223e50e87be90760e1acbd080ac48c2bf873b88457542259d3a2df549"} Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerDied","Data":"9f238240eb24286b76efa80a1726feea15eb1b5428f453740332097da1e38c23"} Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.559691 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-api" containerID="cri-o://933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec" gracePeriod=30 Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.584866 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" podStartSLOduration=3.584847164 podStartE2EDuration="3.584847164s" podCreationTimestamp="2025-10-14 07:09:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:40.574009336 +0000 UTC m=+1328.485093152" watchObservedRunningTime="2025-10-14 07:09:40.584847164 +0000 UTC m=+1328.495930980" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.621600 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708282 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708337 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjrcm\" (UniqueName: \"kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708401 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708429 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708464 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708491 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708558 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.708721 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd\") pod \"c1194eea-529b-4873-8665-c22c2fbb2b27\" (UID: \"c1194eea-529b-4873-8665-c22c2fbb2b27\") " Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.709201 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: 
"c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.709679 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.709990 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.714428 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm" (OuterVolumeSpecName: "kube-api-access-jjrcm") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "kube-api-access-jjrcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.716032 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts" (OuterVolumeSpecName: "scripts") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.754941 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.781071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.811086 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1194eea-529b-4873-8665-c22c2fbb2b27-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.811128 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjrcm\" (UniqueName: \"kubernetes.io/projected/c1194eea-529b-4873-8665-c22c2fbb2b27-kube-api-access-jjrcm\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.811140 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.811153 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.811164 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.818312 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.839620 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data" (OuterVolumeSpecName: "config-data") pod "c1194eea-529b-4873-8665-c22c2fbb2b27" (UID: "c1194eea-529b-4873-8665-c22c2fbb2b27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.912568 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.912619 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1194eea-529b-4873-8665-c22c2fbb2b27-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.934743 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.954123 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:09:40 crc kubenswrapper[5058]: I1014 07:09:40.954203 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.574282 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1194eea-529b-4873-8665-c22c2fbb2b27","Type":"ContainerDied","Data":"d59aae227df0e81f1ee8161e280ed455db1bd487a88305670a7a22669a178ae1"} Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.574343 5058 scope.go:117] "RemoveContainer" containerID="18cf0eabc528f19571aa3342a5cfd3b332ebbf68b74fbdcb901543fbbdd1fbcc" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.574353 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.576799 5058 generic.go:334] "Generic (PLEG): container finished" podID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerID="1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5" exitCode=143 Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.576831 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerDied","Data":"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5"} Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.615584 5058 scope.go:117] "RemoveContainer" containerID="bbeb69e4e385c48265b40df8c15edd1fba1aacd841f9efd3e30e55d77094cf8a" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.646323 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.657851 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.663452 5058 scope.go:117] "RemoveContainer" containerID="b035b6b223e50e87be90760e1acbd080ac48c2bf873b88457542259d3a2df549" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.668533 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:41 crc kubenswrapper[5058]: E1014 07:09:41.668995 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="sg-core" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669015 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="sg-core" Oct 14 07:09:41 crc kubenswrapper[5058]: E1014 07:09:41.669035 5058 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="proxy-httpd" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669044 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="proxy-httpd" Oct 14 07:09:41 crc kubenswrapper[5058]: E1014 07:09:41.669072 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-central-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669081 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-central-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: E1014 07:09:41.669093 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-notification-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669101 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-notification-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669332 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="proxy-httpd" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669353 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-notification-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669371 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="sg-core" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.669382 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" containerName="ceilometer-central-agent" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.671559 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.675125 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.675159 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.677018 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.677230 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.696289 5058 scope.go:117] "RemoveContainer" containerID="9f238240eb24286b76efa80a1726feea15eb1b5428f453740332097da1e38c23" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831339 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831407 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831440 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831473 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831500 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4hp\" (UniqueName: \"kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831527 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.831758 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 
07:09:41.831815 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933659 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933734 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933768 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933804 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933872 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4hp\" (UniqueName: \"kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933897 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933971 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.933994 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.934184 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: 
I1014 07:09:41.935212 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.941002 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.945610 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.947551 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.948574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.950036 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.951911 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4hp\" (UniqueName: \"kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp\") pod \"ceilometer-0\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " pod="openstack/ceilometer-0" Oct 14 07:09:41 crc kubenswrapper[5058]: I1014 07:09:41.999534 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:09:42 crc kubenswrapper[5058]: I1014 07:09:42.448557 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:09:42 crc kubenswrapper[5058]: W1014 07:09:42.458032 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42219495_c664_4b9f_a5a7_f66b0bea105a.slice/crio-a29da43231afc29a7824d7f94710dc2bbc75e4c4e8c6108dd9e4063cd8abcd29 WatchSource:0}: Error finding container a29da43231afc29a7824d7f94710dc2bbc75e4c4e8c6108dd9e4063cd8abcd29: Status 404 returned error can't find the container with id a29da43231afc29a7824d7f94710dc2bbc75e4c4e8c6108dd9e4063cd8abcd29 Oct 14 07:09:42 crc kubenswrapper[5058]: I1014 07:09:42.589015 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerStarted","Data":"a29da43231afc29a7824d7f94710dc2bbc75e4c4e8c6108dd9e4063cd8abcd29"} Oct 14 07:09:42 crc kubenswrapper[5058]: I1014 07:09:42.803614 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1194eea-529b-4873-8665-c22c2fbb2b27" path="/var/lib/kubelet/pods/c1194eea-529b-4873-8665-c22c2fbb2b27/volumes" Oct 14 07:09:43 crc kubenswrapper[5058]: I1014 07:09:43.606860 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerStarted","Data":"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80"} Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.118129 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.279328 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data\") pod \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.279458 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s86w\" (UniqueName: \"kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w\") pod \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.279513 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs\") pod \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.279596 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle\") pod \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\" (UID: \"ce8bdf24-6ff3-4644-a102-840f9a8f68bd\") " Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.280889 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs" (OuterVolumeSpecName: "logs") pod "ce8bdf24-6ff3-4644-a102-840f9a8f68bd" (UID: "ce8bdf24-6ff3-4644-a102-840f9a8f68bd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.304556 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w" (OuterVolumeSpecName: "kube-api-access-2s86w") pod "ce8bdf24-6ff3-4644-a102-840f9a8f68bd" (UID: "ce8bdf24-6ff3-4644-a102-840f9a8f68bd"). InnerVolumeSpecName "kube-api-access-2s86w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.318842 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce8bdf24-6ff3-4644-a102-840f9a8f68bd" (UID: "ce8bdf24-6ff3-4644-a102-840f9a8f68bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.333370 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data" (OuterVolumeSpecName: "config-data") pod "ce8bdf24-6ff3-4644-a102-840f9a8f68bd" (UID: "ce8bdf24-6ff3-4644-a102-840f9a8f68bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.383173 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.383207 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.383217 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s86w\" (UniqueName: \"kubernetes.io/projected/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-kube-api-access-2s86w\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.383227 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce8bdf24-6ff3-4644-a102-840f9a8f68bd-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.625469 5058 generic.go:334] "Generic (PLEG): container finished" podID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerID="933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec" exitCode=0 Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.625947 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerDied","Data":"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec"} Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.625991 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce8bdf24-6ff3-4644-a102-840f9a8f68bd","Type":"ContainerDied","Data":"6a8a99f1036287097dad26f12314dd1bb43d5b61b8ba7d1aec02a8dac3177584"} Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.626012 5058 scope.go:117] "RemoveContainer" containerID="933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.626009 5058 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.631907 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerStarted","Data":"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09"} Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.665919 5058 scope.go:117] "RemoveContainer" containerID="1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.670379 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.691916 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.699491 5058 scope.go:117] "RemoveContainer" containerID="933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec" Oct 14 07:09:44 crc kubenswrapper[5058]: E1014 07:09:44.701117 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec\": container with ID starting with 933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec not found: ID does not exist" containerID="933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.701166 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec"} err="failed to get container status \"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec\": rpc error: code = NotFound desc = could not find container \"933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec\": container with ID starting with 933e2a8c3e9b0c2e0724508a226f6ad49587ced8e18be5c81a2491dafd2fc4ec not found: ID does not exist" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.701193 5058 scope.go:117] "RemoveContainer" containerID="1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5" Oct 14 07:09:44 crc kubenswrapper[5058]: E1014 07:09:44.701888 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5\": container with ID starting with 1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5 not found: ID does not exist" containerID="1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.701947 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5"} err="failed to get container status \"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5\": rpc error: code = NotFound desc = could not find container \"1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5\": container with ID starting with 1695530729bfc30e7978df85bc5a8d267a1a116e13c6562cbcba9980d0fce5c5 not found: ID does not exist" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.703880 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:44 crc kubenswrapper[5058]: E1014 07:09:44.704335 
5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-api" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.704359 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-api" Oct 14 07:09:44 crc kubenswrapper[5058]: E1014 07:09:44.704381 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-log" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.704390 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-log" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.704645 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-log" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.704672 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" containerName="nova-api-api" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.706080 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.709588 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.709768 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.709922 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.724181 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.798379 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.799358 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.799382 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.799413 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z42lc\" (UniqueName: \"kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.799511 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.799563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.824318 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8bdf24-6ff3-4644-a102-840f9a8f68bd" path="/var/lib/kubelet/pods/ce8bdf24-6ff3-4644-a102-840f9a8f68bd/volumes" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901608 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901700 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901854 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901891 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901912 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.901955 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z42lc\" (UniqueName: \"kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.902104 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.907591 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.909250 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.909637 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.910361 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:44 crc kubenswrapper[5058]: I1014 07:09:44.924326 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z42lc\" (UniqueName: \"kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc\") pod \"nova-api-0\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " pod="openstack/nova-api-0" Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.026062 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.489895 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.648761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerStarted","Data":"99a56662eae079fc7c8488cfe51882e12b2af909873bc3d675f999f61a14d3ce"} Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.655577 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerStarted","Data":"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c"} Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.935667 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.953282 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.956185 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 07:09:45 crc kubenswrapper[5058]: I1014 07:09:45.956254 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.694632 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerStarted","Data":"e4d7518b85947d26c6171f1eab2950b6d3234d20c8173fbf3e81eb1f4e60fb15"} Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.695030 5058 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerStarted","Data":"2f46d25b77e87986761969ede4c3b2a2be4d314e5f92ff4d72e882b69e2161e3"} Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.730357 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.730327623 podStartE2EDuration="2.730327623s" podCreationTimestamp="2025-10-14 07:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:46.712520357 +0000 UTC m=+1334.623604183" watchObservedRunningTime="2025-10-14 07:09:46.730327623 +0000 UTC m=+1334.641411439" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.733340 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.914014 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-glrbj"] Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.919198 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.923819 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.924057 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.934363 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-glrbj"] Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.966205 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 07:09:46 crc kubenswrapper[5058]: I1014 07:09:46.966303 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.061913 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.062278 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp52b\" (UniqueName: \"kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.062637 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.062691 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.164971 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.165044 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp52b\" (UniqueName: \"kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.165113 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.165132 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.170447 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.171217 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.173454 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.188417 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp52b\" (UniqueName: 
\"kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b\") pod \"nova-cell1-cell-mapping-glrbj\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.333963 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.706155 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerStarted","Data":"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190"} Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.835567 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.487876527 podStartE2EDuration="6.835547208s" podCreationTimestamp="2025-10-14 07:09:41 +0000 UTC" firstStartedPulling="2025-10-14 07:09:42.459945547 +0000 UTC m=+1330.371029353" lastFinishedPulling="2025-10-14 07:09:46.807616228 +0000 UTC m=+1334.718700034" observedRunningTime="2025-10-14 07:09:47.737993276 +0000 UTC m=+1335.649077102" watchObservedRunningTime="2025-10-14 07:09:47.835547208 +0000 UTC m=+1335.746631024" Oct 14 07:09:47 crc kubenswrapper[5058]: I1014 07:09:47.836340 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-glrbj"] Oct 14 07:09:48 crc kubenswrapper[5058]: I1014 07:09:48.080887 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:09:48 crc kubenswrapper[5058]: I1014 07:09:48.157684 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:09:48 crc kubenswrapper[5058]: I1014 07:09:48.157976 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="dnsmasq-dns" containerID="cri-o://a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff" gracePeriod=10 Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.715421 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.721741 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-glrbj" event={"ID":"daf2714b-6496-498c-8027-b41adcda1a41","Type":"ContainerStarted","Data":"f4ac6e8b23e28b50b8cab485e5f74d65c0c304c304bce434d559be066174b01f"} Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.721845 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-glrbj" event={"ID":"daf2714b-6496-498c-8027-b41adcda1a41","Type":"ContainerStarted","Data":"4536473df688b41d11ee60f7650ccf178bd701fe36a8484324f248456f43afdc"} Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.750531 5058 generic.go:334] "Generic (PLEG): container finished" podID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerID="a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff" exitCode=0 Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.750618 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" event={"ID":"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263","Type":"ContainerDied","Data":"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff"} Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.750678 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.750683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" event={"ID":"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263","Type":"ContainerDied","Data":"c9b3206bad8e61e787958faadb342b400d8dde49cfaf1672d672281cf0bc1ef2"} Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.750719 5058 scope.go:117] "RemoveContainer" containerID="a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.751533 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.792640 5058 scope.go:117] "RemoveContainer" containerID="d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.796714 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.796874 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.796939 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.796968 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llvgr\" (UniqueName: 
\"kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.797320 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.797349 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc\") pod \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\" (UID: \"1b456d32-fcc3-4df4-b3cd-0b1ce6c83263\") " Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.807149 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr" (OuterVolumeSpecName: "kube-api-access-llvgr") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "kube-api-access-llvgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.845925 5058 scope.go:117] "RemoveContainer" containerID="a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.869265 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: E1014 07:09:48.871841 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff\": container with ID starting with a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff not found: ID does not exist" containerID="a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.871897 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff"} err="failed to get container status \"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff\": rpc error: code = NotFound desc = could not find container \"a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff\": container with ID starting with a9c6b0ff45422fea52a6c821e86cb4aba93e9848fcb427f24bf45156e9abbcff not found: ID does not exist" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.871922 5058 scope.go:117] "RemoveContainer" containerID="d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea" Oct 14 07:09:49 crc kubenswrapper[5058]: E1014 07:09:48.873580 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea\": container with ID starting with d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea not found: ID does not exist" containerID="d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.873620 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea"} err="failed to get container status \"d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea\": rpc error: code = NotFound desc = could not find container \"d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea\": container with ID starting with d223eee494f7b7d200c59b8f7a90889044b405024f46ce703076ab6c9177d5ea not found: ID does not exist" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.874671 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.893769 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config" (OuterVolumeSpecName: "config") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.898360 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.899774 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.899788 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.899905 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llvgr\" (UniqueName: \"kubernetes.io/projected/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-kube-api-access-llvgr\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.899919 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.899929 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:48.900407 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" (UID: "1b456d32-fcc3-4df4-b3cd-0b1ce6c83263"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:49.002020 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:49.087680 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-glrbj" podStartSLOduration=3.087654868 podStartE2EDuration="3.087654868s" podCreationTimestamp="2025-10-14 07:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:09:48.785648426 +0000 UTC m=+1336.696732242" watchObservedRunningTime="2025-10-14 07:09:49.087654868 +0000 UTC m=+1336.998738674" Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:49.093778 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:09:49 crc kubenswrapper[5058]: I1014 07:09:49.101489 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d74749bf5-4vkd7"] Oct 14 07:09:50 crc kubenswrapper[5058]: I1014 07:09:50.800003 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" path="/var/lib/kubelet/pods/1b456d32-fcc3-4df4-b3cd-0b1ce6c83263/volumes" Oct 14 07:09:52 crc kubenswrapper[5058]: I1014 07:09:52.802254 5058 generic.go:334] "Generic (PLEG): container finished" podID="daf2714b-6496-498c-8027-b41adcda1a41" containerID="f4ac6e8b23e28b50b8cab485e5f74d65c0c304c304bce434d559be066174b01f" exitCode=0 Oct 14 07:09:52 crc kubenswrapper[5058]: I1014 07:09:52.811389 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-glrbj" event={"ID":"daf2714b-6496-498c-8027-b41adcda1a41","Type":"ContainerDied","Data":"f4ac6e8b23e28b50b8cab485e5f74d65c0c304c304bce434d559be066174b01f"} Oct 14 07:09:53 crc kubenswrapper[5058]: I1014 07:09:53.670568 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d74749bf5-4vkd7" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.174578 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.212299 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data\") pod \"daf2714b-6496-498c-8027-b41adcda1a41\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.212356 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle\") pod \"daf2714b-6496-498c-8027-b41adcda1a41\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.212432 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts\") pod \"daf2714b-6496-498c-8027-b41adcda1a41\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.212512 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp52b\" (UniqueName: \"kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b\") pod \"daf2714b-6496-498c-8027-b41adcda1a41\" (UID: \"daf2714b-6496-498c-8027-b41adcda1a41\") " Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.218595 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b" (OuterVolumeSpecName: "kube-api-access-cp52b") pod "daf2714b-6496-498c-8027-b41adcda1a41" (UID: "daf2714b-6496-498c-8027-b41adcda1a41"). InnerVolumeSpecName "kube-api-access-cp52b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.219998 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts" (OuterVolumeSpecName: "scripts") pod "daf2714b-6496-498c-8027-b41adcda1a41" (UID: "daf2714b-6496-498c-8027-b41adcda1a41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.245455 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daf2714b-6496-498c-8027-b41adcda1a41" (UID: "daf2714b-6496-498c-8027-b41adcda1a41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.260350 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data" (OuterVolumeSpecName: "config-data") pod "daf2714b-6496-498c-8027-b41adcda1a41" (UID: "daf2714b-6496-498c-8027-b41adcda1a41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.327283 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.327332 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.327348 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf2714b-6496-498c-8027-b41adcda1a41-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.327359 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp52b\" (UniqueName: \"kubernetes.io/projected/daf2714b-6496-498c-8027-b41adcda1a41-kube-api-access-cp52b\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.825327 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-glrbj" event={"ID":"daf2714b-6496-498c-8027-b41adcda1a41","Type":"ContainerDied","Data":"4536473df688b41d11ee60f7650ccf178bd701fe36a8484324f248456f43afdc"} Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.825370 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4536473df688b41d11ee60f7650ccf178bd701fe36a8484324f248456f43afdc" Oct 14 07:09:54 crc kubenswrapper[5058]: I1014 07:09:54.825431 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-glrbj" Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.027154 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.027655 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.051352 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.067076 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.067339 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" containerName="nova-scheduler-scheduler" containerID="cri-o://4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" gracePeriod=30 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.139276 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.139535 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-log" containerID="cri-o://ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd" gracePeriod=30 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.139640 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" 
containerName="nova-metadata-metadata" containerID="cri-o://b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e" gracePeriod=30 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.837333 5058 generic.go:334] "Generic (PLEG): container finished" podID="92bd66af-888c-4ff0-9297-11438e4409d1" containerID="ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd" exitCode=143 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.837404 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerDied","Data":"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd"} Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.837878 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-log" containerID="cri-o://2f46d25b77e87986761969ede4c3b2a2be4d314e5f92ff4d72e882b69e2161e3" gracePeriod=30 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.837957 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-api" containerID="cri-o://e4d7518b85947d26c6171f1eab2950b6d3234d20c8173fbf3e81eb1f4e60fb15" gracePeriod=30 Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.843584 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Oct 14 07:09:55 crc kubenswrapper[5058]: I1014 07:09:55.845824 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Oct 14 07:09:56 crc kubenswrapper[5058]: E1014 07:09:56.677330 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:56 crc kubenswrapper[5058]: E1014 07:09:56.678932 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:56 crc kubenswrapper[5058]: E1014 07:09:56.680376 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:09:56 crc kubenswrapper[5058]: E1014 07:09:56.680422 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" containerName="nova-scheduler-scheduler" Oct 14 07:09:56 crc 
Oct 14 07:09:56 crc kubenswrapper[5058]: I1014 07:09:56.850112 5058 generic.go:334] "Generic (PLEG): container finished" podID="485aff60-bcba-4aee-818c-de28e26964e2" containerID="2f46d25b77e87986761969ede4c3b2a2be4d314e5f92ff4d72e882b69e2161e3" exitCode=143
Oct 14 07:09:56 crc kubenswrapper[5058]: I1014 07:09:56.850164 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerDied","Data":"2f46d25b77e87986761969ede4c3b2a2be4d314e5f92ff4d72e882b69e2161e3"}
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.790947 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.869743 5058 generic.go:334] "Generic (PLEG): container finished" podID="92bd66af-888c-4ff0-9297-11438e4409d1" containerID="b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e" exitCode=0
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.869844 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.869836 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerDied","Data":"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"}
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.870840 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92bd66af-888c-4ff0-9297-11438e4409d1","Type":"ContainerDied","Data":"2d340a8bd07ee18a25eaa3d957a509a56f0e25242f08f7b6e33cf626c347e99e"}
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.870927 5058 scope.go:117] "RemoveContainer" containerID="b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.893646 5058 scope.go:117] "RemoveContainer" containerID="ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.914672 5058 scope.go:117] "RemoveContainer" containerID="b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"
Oct 14 07:09:58 crc kubenswrapper[5058]: E1014 07:09:58.915091 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e\": container with ID starting with b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e not found: ID does not exist" containerID="b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.915134 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e"} err="failed to get container status \"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e\": rpc error: code = NotFound desc = could not find container \"b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e\": container with ID starting with b58de265a4cb1a33aea0fc2e642b03edc82f59f3b636d64d544bb9827950553e not found: ID does not exist"
Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.915164 5058 scope.go:117] "RemoveContainer" containerID="ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd"
Oct 14 07:09:58 crc kubenswrapper[5058]:
E1014 07:09:58.915470 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd\": container with ID starting with ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd not found: ID does not exist" containerID="ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.915499 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd"} err="failed to get container status \"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd\": rpc error: code = NotFound desc = could not find container \"ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd\": container with ID starting with ab96575c4661705958cbdf957b92526467568fda80e1a0246e355b75652d1dbd not found: ID does not exist" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.918324 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data\") pod \"92bd66af-888c-4ff0-9297-11438e4409d1\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.918417 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs\") pod \"92bd66af-888c-4ff0-9297-11438e4409d1\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.918532 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs\") pod \"92bd66af-888c-4ff0-9297-11438e4409d1\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.918549 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28b6\" (UniqueName: \"kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6\") pod \"92bd66af-888c-4ff0-9297-11438e4409d1\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.918651 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle\") pod \"92bd66af-888c-4ff0-9297-11438e4409d1\" (UID: \"92bd66af-888c-4ff0-9297-11438e4409d1\") " Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.919186 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs" (OuterVolumeSpecName: "logs") pod "92bd66af-888c-4ff0-9297-11438e4409d1" (UID: "92bd66af-888c-4ff0-9297-11438e4409d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.927025 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6" (OuterVolumeSpecName: "kube-api-access-q28b6") pod "92bd66af-888c-4ff0-9297-11438e4409d1" (UID: "92bd66af-888c-4ff0-9297-11438e4409d1"). InnerVolumeSpecName "kube-api-access-q28b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.950440 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data" (OuterVolumeSpecName: "config-data") pod "92bd66af-888c-4ff0-9297-11438e4409d1" (UID: "92bd66af-888c-4ff0-9297-11438e4409d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.957957 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92bd66af-888c-4ff0-9297-11438e4409d1" (UID: "92bd66af-888c-4ff0-9297-11438e4409d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:58 crc kubenswrapper[5058]: I1014 07:09:58.979097 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "92bd66af-888c-4ff0-9297-11438e4409d1" (UID: "92bd66af-888c-4ff0-9297-11438e4409d1"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.021084 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.021121 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.021132 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bd66af-888c-4ff0-9297-11438e4409d1-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.021140 5058 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bd66af-888c-4ff0-9297-11438e4409d1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.021151 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q28b6\" (UniqueName: \"kubernetes.io/projected/92bd66af-888c-4ff0-9297-11438e4409d1-kube-api-access-q28b6\") on node \"crc\" DevicePath \"\"" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.204134 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.212081 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.235596 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:59 crc kubenswrapper[5058]: E1014 07:09:59.236062 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-log" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236085 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-log" Oct 14 07:09:59 crc kubenswrapper[5058]: E1014 07:09:59.236098 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="init" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236105 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="init" Oct 14 07:09:59 crc kubenswrapper[5058]: E1014 07:09:59.236114 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="dnsmasq-dns" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236120 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="dnsmasq-dns" Oct 14 07:09:59 crc kubenswrapper[5058]: E1014 07:09:59.236129 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf2714b-6496-498c-8027-b41adcda1a41" containerName="nova-manage" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236138 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf2714b-6496-498c-8027-b41adcda1a41" containerName="nova-manage" Oct 14 07:09:59 crc kubenswrapper[5058]: E1014 07:09:59.236165 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-metadata" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236171 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-metadata" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236391 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf2714b-6496-498c-8027-b41adcda1a41" containerName="nova-manage" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236413 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-metadata" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236426 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" containerName="nova-metadata-log" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.236436 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b456d32-fcc3-4df4-b3cd-0b1ce6c83263" containerName="dnsmasq-dns" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.237742 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.241402 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.241712 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.258385 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.327665 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8kp\" (UniqueName: \"kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.328110 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.328608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.328958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.329099 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.430712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8kp\" (UniqueName: \"kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.430829 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.430861 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.430930 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.430984 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.431490 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.434592 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.435091 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.435520 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.448249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4s8kp\" (UniqueName: \"kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp\") pod \"nova-metadata-0\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " pod="openstack/nova-metadata-0" Oct 14 07:09:59 crc kubenswrapper[5058]: I1014 07:09:59.555083 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.195432 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:10:00 crc kubenswrapper[5058]: W1014 07:10:00.195463 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e04a9c0_9542_49e8_b185_87a9ab267e7a.slice/crio-acfe4ed47b6c12938ef0c19a0ab7489749d622e5fcb77515c43da88dff94012d WatchSource:0}: Error finding container acfe4ed47b6c12938ef0c19a0ab7489749d622e5fcb77515c43da88dff94012d: Status 404 returned error can't find the container with id acfe4ed47b6c12938ef0c19a0ab7489749d622e5fcb77515c43da88dff94012d Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.782557 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.804044 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bd66af-888c-4ff0-9297-11438e4409d1" path="/var/lib/kubelet/pods/92bd66af-888c-4ff0-9297-11438e4409d1/volumes" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.857337 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle\") pod \"59099423-0d1d-46bb-97f7-22fb9099467b\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.857404 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data\") pod \"59099423-0d1d-46bb-97f7-22fb9099467b\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.857469 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt6qp\" (UniqueName: \"kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp\") pod \"59099423-0d1d-46bb-97f7-22fb9099467b\" (UID: \"59099423-0d1d-46bb-97f7-22fb9099467b\") " Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.866084 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp" (OuterVolumeSpecName: "kube-api-access-vt6qp") pod "59099423-0d1d-46bb-97f7-22fb9099467b" (UID: "59099423-0d1d-46bb-97f7-22fb9099467b"). InnerVolumeSpecName "kube-api-access-vt6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.892320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59099423-0d1d-46bb-97f7-22fb9099467b" (UID: "59099423-0d1d-46bb-97f7-22fb9099467b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.896660 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data" (OuterVolumeSpecName: "config-data") pod "59099423-0d1d-46bb-97f7-22fb9099467b" (UID: "59099423-0d1d-46bb-97f7-22fb9099467b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.903499 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerStarted","Data":"13b2f201f21fe9a64c31bfba7e902dfeaa6156cf92a7a9d10dc0f0276012bbec"} Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.903549 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerStarted","Data":"acfe4ed47b6c12938ef0c19a0ab7489749d622e5fcb77515c43da88dff94012d"} Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.904968 5058 generic.go:334] "Generic (PLEG): container finished" podID="59099423-0d1d-46bb-97f7-22fb9099467b" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" exitCode=0 Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.905001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59099423-0d1d-46bb-97f7-22fb9099467b","Type":"ContainerDied","Data":"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746"} Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.905021 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59099423-0d1d-46bb-97f7-22fb9099467b","Type":"ContainerDied","Data":"1716a3964d60f1231e8c5314784b187dd4b348ad775e762a0d0dd9bc75d81b6e"} Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.905040 5058 scope.go:117] "RemoveContainer" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.905423 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.948526 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.965680 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt6qp\" (UniqueName: \"kubernetes.io/projected/59099423-0d1d-46bb-97f7-22fb9099467b-kube-api-access-vt6qp\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.965718 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.965731 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59099423-0d1d-46bb-97f7-22fb9099467b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.971354 5058 scope.go:117] "RemoveContainer" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" Oct 14 07:10:00 crc kubenswrapper[5058]: E1014 07:10:00.973355 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746\": container with ID starting with 4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746 not found: ID does not exist" containerID="4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.973402 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.973403 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746"} err="failed to get container status \"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746\": rpc error: code = NotFound desc = could not find container \"4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746\": container with ID starting with 4d68f7bbbabc4cf18d7f9aa3a1cbafaddb5d38bbc26720de403e85d5ecc60746 not found: ID does not exist" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.985993 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:00 crc kubenswrapper[5058]: E1014 07:10:00.986501 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" containerName="nova-scheduler-scheduler" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.986527 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" containerName="nova-scheduler-scheduler" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.986788 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" containerName="nova-scheduler-scheduler" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.987537 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.993375 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 07:10:00 crc kubenswrapper[5058]: I1014 07:10:00.996044 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.067503 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.067614 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.067724 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjv4\" (UniqueName: \"kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.169424 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.169526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjv4\" (UniqueName: \"kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.169616 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.173566 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.173676 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.188233 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnjv4\" (UniqueName: 
\"kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4\") pod \"nova-scheduler-0\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.313638 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.806624 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:01 crc kubenswrapper[5058]: W1014 07:10:01.816720 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18689556_e39f_4c5d_add8_aa934c468f1b.slice/crio-e86aab12d5fc019486f6a2be9d4c8d9e1c40145036bdcfc86c6a15f78a6f2b1a WatchSource:0}: Error finding container e86aab12d5fc019486f6a2be9d4c8d9e1c40145036bdcfc86c6a15f78a6f2b1a: Status 404 returned error can't find the container with id e86aab12d5fc019486f6a2be9d4c8d9e1c40145036bdcfc86c6a15f78a6f2b1a Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.924162 5058 generic.go:334] "Generic (PLEG): container finished" podID="485aff60-bcba-4aee-818c-de28e26964e2" containerID="e4d7518b85947d26c6171f1eab2950b6d3234d20c8173fbf3e81eb1f4e60fb15" exitCode=0 Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.924323 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerDied","Data":"e4d7518b85947d26c6171f1eab2950b6d3234d20c8173fbf3e81eb1f4e60fb15"} Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.924354 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"485aff60-bcba-4aee-818c-de28e26964e2","Type":"ContainerDied","Data":"99a56662eae079fc7c8488cfe51882e12b2af909873bc3d675f999f61a14d3ce"} Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.924369 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a56662eae079fc7c8488cfe51882e12b2af909873bc3d675f999f61a14d3ce" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.929399 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18689556-e39f-4c5d-add8-aa934c468f1b","Type":"ContainerStarted","Data":"e86aab12d5fc019486f6a2be9d4c8d9e1c40145036bdcfc86c6a15f78a6f2b1a"} Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.931484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerStarted","Data":"7e5c7b3f56c44a6ade0810b15f6d0c8395be9268c61c135c78ce10faffa09108"} Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.955505 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.955489906 podStartE2EDuration="2.955489906s" podCreationTimestamp="2025-10-14 07:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:10:01.951948226 +0000 UTC m=+1349.863032042" watchObservedRunningTime="2025-10-14 07:10:01.955489906 +0000 UTC m=+1349.866573712" Oct 14 07:10:01 crc kubenswrapper[5058]: I1014 07:10:01.969037 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.095829 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.097187 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.097454 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.097665 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.097886 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs" (OuterVolumeSpecName: "logs") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.104217 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z42lc\" (UniqueName: \"kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.104760 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs\") pod \"485aff60-bcba-4aee-818c-de28e26964e2\" (UID: \"485aff60-bcba-4aee-818c-de28e26964e2\") " Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.105586 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485aff60-bcba-4aee-818c-de28e26964e2-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.107475 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc" (OuterVolumeSpecName: "kube-api-access-z42lc") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "kube-api-access-z42lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.121580 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.122955 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data" (OuterVolumeSpecName: "config-data") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.150071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.155047 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "485aff60-bcba-4aee-818c-de28e26964e2" (UID: "485aff60-bcba-4aee-818c-de28e26964e2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.207338 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.207455 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.207474 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.207491 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/485aff60-bcba-4aee-818c-de28e26964e2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.207510 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z42lc\" (UniqueName: \"kubernetes.io/projected/485aff60-bcba-4aee-818c-de28e26964e2-kube-api-access-z42lc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.814132 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59099423-0d1d-46bb-97f7-22fb9099467b" path="/var/lib/kubelet/pods/59099423-0d1d-46bb-97f7-22fb9099467b/volumes" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.949886 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"18689556-e39f-4c5d-add8-aa934c468f1b","Type":"ContainerStarted","Data":"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68"} Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.950017 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.978641 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.994643 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:02 crc kubenswrapper[5058]: I1014 07:10:02.997963 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.997941518 podStartE2EDuration="2.997941518s" podCreationTimestamp="2025-10-14 07:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:10:02.996749125 +0000 UTC m=+1350.907832941" watchObservedRunningTime="2025-10-14 07:10:02.997941518 +0000 UTC m=+1350.909025324" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.015917 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:03 crc kubenswrapper[5058]: E1014 07:10:03.016417 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-api" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.016432 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-api" Oct 14 07:10:03 crc kubenswrapper[5058]: E1014 07:10:03.016478 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-log" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.016486 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-log" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.016739 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-api" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.016757 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="485aff60-bcba-4aee-818c-de28e26964e2" containerName="nova-api-log" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.018033 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.020820 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.044841 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.045350 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.045769 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.137863 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.138142 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q6r7\" (UniqueName: \"kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.138220 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.138325 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.138415 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.138617 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.240967 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q6r7\" (UniqueName: \"kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.241210 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.243008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.243860 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.243954 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.244085 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.248662 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.250010 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.250081 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.253551 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.261179 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q6r7\" (UniqueName: \"kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.275110 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " pod="openstack/nova-api-0" Oct 
14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.364587 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.656177 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.656623 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.656685 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.657514 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.657592 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc" gracePeriod=600
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.839832 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 14 07:10:03 crc kubenswrapper[5058]: W1014 07:10:03.848089 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6c9e27_cbe7_4371_8c0c_614755871b4e.slice/crio-6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554 WatchSource:0}: Error finding container 6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554: Status 404 returned error can't find the container with id 6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.972309 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc" exitCode=0
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.972419 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc"}
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.972471 5058 scope.go:117] "RemoveContainer" containerID="11c4b957e32f21626508faf45924a31d54a73d0ce8b7652f6bc10e3c25dd0778"
Oct 14 07:10:03 crc kubenswrapper[5058]: I1014 07:10:03.974345 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerStarted","Data":"6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554"}
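The liveness failure above is a plain TCP refusal: the kubelet GETs http://127.0.0.1:8798/health, gets "connection refused", and once the failure threshold is reached kills the container with its grace period (gracePeriod=600 here) so it can be restarted. A minimal sketch of a handler that would satisfy this probe; the path and port are taken from the log, everything else is illustrative:

package main

import (
	"log"
	"net/http"
)

func main() {
	// The kubelet treats connection refused or a non-2xx status as a
	// probe failure; returning 200 here keeps the liveness probe green.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}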
event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerStarted","Data":"6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554"} Oct 14 07:10:04 crc kubenswrapper[5058]: I1014 07:10:04.556195 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:10:04 crc kubenswrapper[5058]: I1014 07:10:04.556619 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 07:10:04 crc kubenswrapper[5058]: I1014 07:10:04.807439 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485aff60-bcba-4aee-818c-de28e26964e2" path="/var/lib/kubelet/pods/485aff60-bcba-4aee-818c-de28e26964e2/volumes" Oct 14 07:10:04 crc kubenswrapper[5058]: I1014 07:10:04.989538 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerStarted","Data":"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf"} Oct 14 07:10:04 crc kubenswrapper[5058]: I1014 07:10:04.989607 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerStarted","Data":"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b"} Oct 14 07:10:05 crc kubenswrapper[5058]: I1014 07:10:04.997692 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5"} Oct 14 07:10:05 crc kubenswrapper[5058]: I1014 07:10:05.014313 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.014289594 podStartE2EDuration="3.014289594s" podCreationTimestamp="2025-10-14 07:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:10:05.012139923 +0000 UTC m=+1352.923223739" watchObservedRunningTime="2025-10-14 07:10:05.014289594 +0000 UTC m=+1352.925373420" Oct 14 07:10:06 crc kubenswrapper[5058]: I1014 07:10:06.314081 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 07:10:09 crc kubenswrapper[5058]: I1014 07:10:09.555698 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 07:10:09 crc kubenswrapper[5058]: I1014 07:10:09.556437 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 07:10:10 crc kubenswrapper[5058]: I1014 07:10:10.571966 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 07:10:10 crc kubenswrapper[5058]: I1014 07:10:10.572003 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 07:10:11 crc kubenswrapper[5058]: I1014 07:10:11.314570 5058 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 07:10:11 crc kubenswrapper[5058]: I1014 07:10:11.348428 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 07:10:12 crc kubenswrapper[5058]: I1014 07:10:12.011462 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 07:10:12 crc kubenswrapper[5058]: I1014 07:10:12.131167 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 07:10:13 crc kubenswrapper[5058]: I1014 07:10:13.366414 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:10:13 crc kubenswrapper[5058]: I1014 07:10:13.366743 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 07:10:14 crc kubenswrapper[5058]: I1014 07:10:14.379991 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 07:10:14 crc kubenswrapper[5058]: I1014 07:10:14.380437 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 07:10:19 crc kubenswrapper[5058]: I1014 07:10:19.560068 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 07:10:19 crc kubenswrapper[5058]: I1014 07:10:19.562054 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 07:10:19 crc kubenswrapper[5058]: I1014 07:10:19.569664 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 07:10:20 crc kubenswrapper[5058]: I1014 07:10:20.224005 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 07:10:23 crc kubenswrapper[5058]: I1014 07:10:23.377274 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 07:10:23 crc kubenswrapper[5058]: I1014 07:10:23.379515 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 07:10:23 crc kubenswrapper[5058]: I1014 07:10:23.387875 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 07:10:23 crc kubenswrapper[5058]: I1014 07:10:23.393361 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 07:10:24 crc kubenswrapper[5058]: I1014 07:10:24.266329 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 07:10:24 crc kubenswrapper[5058]: I1014 07:10:24.281266 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.235871 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.236521 5058 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" containerName="openstackclient" containerID="cri-o://ccd63adf9ff2533f48ef01689c4ae84b7ad678fd3261c983558f6e60e740f9b0" gracePeriod=2 Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.274541 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.310356 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:46 crc kubenswrapper[5058]: E1014 07:10:46.310785 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" containerName="openstackclient" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.310814 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" containerName="openstackclient" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.311035 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" containerName="openstackclient" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.311832 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.345691 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.421419 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gftc"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.436620 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4x26x"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.437041 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-4x26x" podUID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" containerName="openstack-network-exporter" containerID="cri-o://dec221ebaaab946f5f28317e8fedd374c12dcc709014570c2b792f667fb91617" gracePeriod=30 Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.517811 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-wskz9"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.596761 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.598169 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.611561 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9795\" (UniqueName: \"kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795\") pod \"glance3d16-account-delete-zwlpf\" (UID: \"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409\") " pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.669893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.689211 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-f2gf4"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.708603 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-f2gf4"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.717869 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9795\" (UniqueName: \"kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795\") pod \"glance3d16-account-delete-zwlpf\" (UID: \"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409\") " pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.718181 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jd7h\" (UniqueName: \"kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h\") pod \"cinder0800-account-delete-rglnm\" (UID: \"9a92fd66-33d0-4026-9f89-4fe1b7a57478\") " pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.718677 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cxv76"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.728889 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cxv76"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.741863 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8n5f9"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.741910 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8n5f9"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.754819 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.756423 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.758227 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.759611 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9795\" (UniqueName: \"kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795\") pod \"glance3d16-account-delete-zwlpf\" (UID: \"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409\") " pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.768626 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.775888 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.777164 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.784499 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.824233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jd7h\" (UniqueName: \"kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h\") pod \"cinder0800-account-delete-rglnm\" (UID: \"9a92fd66-33d0-4026-9f89-4fe1b7a57478\") " pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.824612 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") pod \"neutron9e44-account-delete-z6nhv\" (UID: \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\") " pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:46 crc kubenswrapper[5058]: E1014 07:10:46.824973 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:46 crc kubenswrapper[5058]: E1014 07:10:46.825030 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data podName:b753342c-4a7e-4bf6-809a-3c5bc083ba6a nodeName:}" failed. No retries permitted until 2025-10-14 07:10:47.325014087 +0000 UTC m=+1395.236097893 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data") pod "rabbitmq-cell1-server-0" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a") : configmap "rabbitmq-cell1-config-data" not found
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.849357 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c0ea98-8f40-4281-b309-4c3994b849ba" path="/var/lib/kubelet/pods/15c0ea98-8f40-4281-b309-4c3994b849ba/volumes"
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.851070 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3232b896-82b9-4d80-b74a-2e860e05ffbc" path="/var/lib/kubelet/pods/3232b896-82b9-4d80-b74a-2e860e05ffbc/volumes"
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.851998 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e6ff86-f483-424b-9f6d-5f6d0c1e81b0" path="/var/lib/kubelet/pods/74e6ff86-f483-424b-9f6d-5f6d0c1e81b0/volumes"
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.852597 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7fblc"]
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.852653 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7fblc"]
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.852775 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jd7h\" (UniqueName: \"kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h\") pod \"cinder0800-account-delete-rglnm\" (UID: \"9a92fd66-33d0-4026-9f89-4fe1b7a57478\") " pod="openstack/cinder0800-account-delete-rglnm"
Oct 14 07:10:46 crc kubenswrapper[5058]: E1014 07:10:46.914663 5058 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-4gftc" message=<
Oct 14 07:10:46 crc kubenswrapper[5058]: Exiting ovn-controller (1) [ OK ]
Oct 14 07:10:46 crc kubenswrapper[5058]: >
Oct 14 07:10:46 crc kubenswrapper[5058]: E1014 07:10:46.914700 5058 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-4gftc" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller" containerID="cri-o://3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2"
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.914730 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-4gftc" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller" containerID="cri-o://3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2" gracePeriod=30
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.926493 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcl98\" (UniqueName: \"kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98\") pod \"barbicanb71d-account-delete-xt45s\" (UID: \"df614c68-4293-4c6c-a18e-e87dfcf8fe26\") " pod="openstack/barbicanb71d-account-delete-xt45s"
Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.926538 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") pod \"neutron9e44-account-delete-z6nhv\" (UID: \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\") " pod="openstack/neutron9e44-account-delete-z6nhv"
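The PreStop hook failure above reports exit status 137, i.e. 128+9: the ovn-ctl stop_controller script was most likely still running when SIGKILL arrived during teardown (its own output shows the controller did stop cleanly). A sketch of decoding that kind of status from a child process in Go; Linux-specific because of the syscall.WaitStatus cast:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// The shell kills itself with SIGKILL, so Run returns an ExitError
	// whose wait status is "signaled" rather than a normal exit code.
	cmd := exec.Command("sh", "-c", "kill -9 $$")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		ws := ee.Sys().(syscall.WaitStatus)
		if ws.Signaled() {
			// Prints: killed by signal 9 (exit code 137)
			fmt.Printf("killed by signal %d (exit code %d)\n",
				ws.Signal(), 128+int(ws.Signal()))
		}
	}
}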
\"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") pod \"neutron9e44-account-delete-z6nhv\" (UID: \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\") " pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.945823 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.946378 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.946595 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd" containerID="cri-o://3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" gracePeriod=30 Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.946978 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="openstack-network-exporter" containerID="cri-o://33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d" gracePeriod=30 Oct 14 07:10:46 crc kubenswrapper[5058]: I1014 07:10:46.954016 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") pod \"neutron9e44-account-delete-z6nhv\" (UID: \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\") " pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.000758 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.032482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcl98\" (UniqueName: \"kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98\") pod \"barbicanb71d-account-delete-xt45s\" (UID: \"df614c68-4293-4c6c-a18e-e87dfcf8fe26\") " pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.049582 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.050014 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="openstack-network-exporter" containerID="cri-o://a198fefec8d144e30064366b09f01772a3c0cf42d9352708a7015336d4c0e2d0" gracePeriod=300 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.072761 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcl98\" (UniqueName: \"kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98\") pod \"barbicanb71d-account-delete-xt45s\" (UID: \"df614c68-4293-4c6c-a18e-e87dfcf8fe26\") " pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.174075 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jkqz7"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.178331 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="ovsdbserver-sb" containerID="cri-o://754811d971b5ee596d7950d9970b2744d554f9a03ec5093c8a2ff92a72eb0be3" gracePeriod=300 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.201280 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jkqz7"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.219786 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.227604 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.234998 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.277934 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.278895 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="openstack-network-exporter" containerID="cri-o://49e8270533c6a8d714a36b9d0639d16b7a4f6bbe36fd6e76e2691778fef147d6" gracePeriod=300 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.311856 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.312123 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="cinder-scheduler" containerID="cri-o://3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.312591 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="probe" containerID="cri-o://968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.339357 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.339619 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8d77dbf7-m7vgh" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-api" containerID="cri-o://65ea6b846599e21eb16e6adb81ef1c3d6526e566be9b38e51f208cad25bb0ef5" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.339903 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8d77dbf7-m7vgh" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-httpd" containerID="cri-o://cd295ee61f567f1a94e09d2c0e675720e548cfe9f3dbd734af3e07368dc83f14" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.339624 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-glrbj"] Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.343014 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.343067 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data podName:b753342c-4a7e-4bf6-809a-3c5bc083ba6a nodeName:}" failed. No retries permitted until 2025-10-14 07:10:48.343054164 +0000 UTC m=+1396.254137970 (durationBeforeRetry 1s). 
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.343916 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.343943 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data podName:59f969a6-6fea-40c8-9254-284205f5b3ea nodeName:}" failed. No retries permitted until 2025-10-14 07:10:47.843934929 +0000 UTC m=+1395.755018735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data") pod "rabbitmq-server-0" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea") : configmap "rabbitmq-config-data" not found
Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.358297 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-glrbj"]
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.368178 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.369164 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.370287 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.370315 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd"
Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.379492 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4xllk"]
Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.395626 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4xllk"]
Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.426175 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="ovsdbserver-nb" containerID="cri-o://ddd9d9608be750d69855a085c54444295e513581feb904e658f9daaf62f4edf6" gracePeriod=300
Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 
07:10:47.592896 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7kd74"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.632435 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7kd74"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.643666 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.644077 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-log" containerID="cri-o://3dd806326c4ebc88f937e535e0184dd39275bf7daaa1cdd5ccba019323c49240" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.645240 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-httpd" containerID="cri-o://ddf55f3c2bbe35fdc2ab51be906a947fa2e2fdff71a5c9edee6623ab01efa4ca" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.646107 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" containerID="cri-o://a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" gracePeriod=29 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.684941 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.685316 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api-log" containerID="cri-o://75cd20c0073237cb2ae10b086a89b60f8da343d852dbb0faa5aaa253f9871fe4" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.685470 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api" containerID="cri-o://f5d187c11c0f467f10e4343a5baa9bc8683d7c0ce7601c7a7a218b64e989e1ed" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.689378 5058 generic.go:334] "Generic (PLEG): container finished" podID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerID="33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d" exitCode=2 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.689603 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerDied","Data":"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732572 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74d9700c-c7fa-4020-939c-ce42c1b3afe8/ovsdbserver-nb/0.log" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732617 5058 generic.go:334] "Generic (PLEG): container finished" podID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerID="49e8270533c6a8d714a36b9d0639d16b7a4f6bbe36fd6e76e2691778fef147d6" exitCode=2 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732635 5058 generic.go:334] "Generic (PLEG): container finished" podID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" 
containerID="ddd9d9608be750d69855a085c54444295e513581feb904e658f9daaf62f4edf6" exitCode=143 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732720 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732746 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerDied","Data":"49e8270533c6a8d714a36b9d0639d16b7a4f6bbe36fd6e76e2691778fef147d6"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.732762 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerDied","Data":"ddd9d9608be750d69855a085c54444295e513581feb904e658f9daaf62f4edf6"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.733140 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-server" containerID="cri-o://06b49fc229b22caf4d176c8d244f32c702aa60ee0c1bc3f0ee68302b65265d19" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.733992 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-updater" containerID="cri-o://96660315f94d74277f9b50658993503a1b43f9bef9a95f56461f122ccec07ad5" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734003 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-server" containerID="cri-o://343434d65ccc14910a7fcbead2fb88c5672e280565333bc8a1847ece2eded17c" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734035 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-auditor" containerID="cri-o://537911dbd262660ec60f6e9b0405d1a9f83ab10683dad7e3074f39dc6c1a11e4" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734069 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-replicator" containerID="cri-o://431d79aac23d72bdbe76355998d8a1a48b5be7faa4f9c5fcb39335a05f1f4018" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734101 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-server" containerID="cri-o://1597a4004ccd7362f11728ae0a39556e442ff02bd29578ce9c6c13e045eb6f2a" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734132 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-reaper" containerID="cri-o://54425f7ca236aef845e21698b62205fe14abb46d0ffcc6865557173aab0f934a" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734165 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-auditor" 
containerID="cri-o://a3e3288d52021b1c5dee76e985c4bf2ef23c6520d23e5ef4eb02fe977bc949b4" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734197 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-replicator" containerID="cri-o://6d0292a646c2aabd8315d14682a37635c0746d89b568115520d8211b6b5b0c40" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734343 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-expirer" containerID="cri-o://3f71ae0f3608a3c488d1248099617a68fad24788a23a6eb403537e7c705bc211" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734384 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="swift-recon-cron" containerID="cri-o://f3d2c8c9184a355d31a5639a1537fe51d6e252644fd1b6581a47121a0d587325" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734413 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-updater" containerID="cri-o://33d2eddcb88d038d345ef3bf102f653431e0f85ff697aaaf69ad45de80bb0ebc" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734417 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="rsync" containerID="cri-o://f29643cf82745f7e73e78c4135a5914e8cf5724cd7128107573a6ee8be46e9b0" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734466 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-auditor" containerID="cri-o://509b41a491460621f4e48c030ec3096e244144015e697f031ef0d4a24b2efb95" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.734507 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-replicator" containerID="cri-o://6e399dec260edbe3841b098d9b9f1bf0dfe804d3a5aa01facc0ec4b2d640b130" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.761784 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.762902 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-log" containerID="cri-o://a1c6b82cefdf4862cb066b04eb626d0455c0c4fe043784013c88eec253145cfa" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.764340 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-httpd" containerID="cri-o://6f88404a55530828d3f110f54a67b95cc6d0025e7691e4c253f22716380239e4" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.765361 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-4x26x_fda633cb-5e3d-42da-b21e-be6d5a984f2f/openstack-network-exporter/0.log" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.765424 5058 generic.go:334] "Generic (PLEG): container finished" podID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" containerID="dec221ebaaab946f5f28317e8fedd374c12dcc709014570c2b792f667fb91617" exitCode=2 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.765477 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4x26x" event={"ID":"fda633cb-5e3d-42da-b21e-be6d5a984f2f","Type":"ContainerDied","Data":"dec221ebaaab946f5f28317e8fedd374c12dcc709014570c2b792f667fb91617"} Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.775623 5058 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Oct 14 07:10:47 crc kubenswrapper[5058]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 14 07:10:47 crc kubenswrapper[5058]: + source /usr/local/bin/container-scripts/functions Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNBridge=br-int Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNRemote=tcp:localhost:6642 Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNEncapType=geneve Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNAvailabilityZones= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ EnableChassisAsGateway=true Oct 14 07:10:47 crc kubenswrapper[5058]: ++ PhysicalNetworks= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNHostName= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 14 07:10:47 crc kubenswrapper[5058]: ++ ovs_dir=/var/lib/openvswitch Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 14 07:10:47 crc kubenswrapper[5058]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + cleanup_ovsdb_server_semaphore Oct 14 07:10:47 crc kubenswrapper[5058]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 14 07:10:47 crc kubenswrapper[5058]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-wskz9" message=< Oct 14 07:10:47 crc kubenswrapper[5058]: Exiting ovsdb-server (5) [ OK ] Oct 14 07:10:47 crc kubenswrapper[5058]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 14 07:10:47 crc kubenswrapper[5058]: + source /usr/local/bin/container-scripts/functions Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNBridge=br-int Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNRemote=tcp:localhost:6642 Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNEncapType=geneve Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNAvailabilityZones= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ EnableChassisAsGateway=true Oct 14 07:10:47 crc kubenswrapper[5058]: ++ PhysicalNetworks= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNHostName= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 14 07:10:47 crc kubenswrapper[5058]: ++ ovs_dir=/var/lib/openvswitch Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 14 07:10:47 crc kubenswrapper[5058]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + cleanup_ovsdb_server_semaphore Oct 14 07:10:47 crc kubenswrapper[5058]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 14 07:10:47 crc kubenswrapper[5058]: > Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.775679 5058 kuberuntime_container.go:691] "PreStop hook failed" err=< Oct 14 07:10:47 crc kubenswrapper[5058]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 14 07:10:47 crc kubenswrapper[5058]: + source /usr/local/bin/container-scripts/functions Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNBridge=br-int Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNRemote=tcp:localhost:6642 Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNEncapType=geneve Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNAvailabilityZones= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ EnableChassisAsGateway=true Oct 14 07:10:47 crc kubenswrapper[5058]: ++ PhysicalNetworks= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ OVNHostName= Oct 14 07:10:47 crc kubenswrapper[5058]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 14 07:10:47 crc kubenswrapper[5058]: ++ ovs_dir=/var/lib/openvswitch Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 14 07:10:47 crc kubenswrapper[5058]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 14 07:10:47 crc kubenswrapper[5058]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + sleep 0.5 Oct 14 07:10:47 crc kubenswrapper[5058]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 14 07:10:47 crc kubenswrapper[5058]: + cleanup_ovsdb_server_semaphore Oct 14 07:10:47 crc kubenswrapper[5058]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 14 07:10:47 crc kubenswrapper[5058]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 14 07:10:47 crc kubenswrapper[5058]: > pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" containerID="cri-o://3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.775706 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" containerID="cri-o://3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" gracePeriod=29 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.801292 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a5a36ee5-eb19-4390-a07e-3e22a191ad58/ovsdbserver-sb/0.log" Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.801587 5058 generic.go:334] "Generic (PLEG): container finished" podID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerID="a198fefec8d144e30064366b09f01772a3c0cf42d9352708a7015336d4c0e2d0" exitCode=2 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.801603 5058 generic.go:334] "Generic (PLEG): container finished" podID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerID="754811d971b5ee596d7950d9970b2744d554f9a03ec5093c8a2ff92a72eb0be3" exitCode=143 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.801669 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerDied","Data":"a198fefec8d144e30064366b09f01772a3c0cf42d9352708a7015336d4c0e2d0"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.801693 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerDied","Data":"754811d971b5ee596d7950d9970b2744d554f9a03ec5093c8a2ff92a72eb0be3"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.812328 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.812546 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="dnsmasq-dns" containerID="cri-o://2182c2e8effad0274964ae00b87d07ae2b2c94468e266f8b6506ca45b077abb0" gracePeriod=10 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.827017 5058 generic.go:334] "Generic (PLEG): container finished" podID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerID="3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2" exitCode=0 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.827128 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc" event={"ID":"6faed14d-d25e-43b8-96db-c64b6b3feece","Type":"ContainerDied","Data":"3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2"} Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.839494 5058 generic.go:334] "Generic (PLEG): container finished" podID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" 
containerID="cd295ee61f567f1a94e09d2c0e675720e548cfe9f3dbd734af3e07368dc83f14" exitCode=0 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.839536 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerDied","Data":"cd295ee61f567f1a94e09d2c0e675720e548cfe9f3dbd734af3e07368dc83f14"} Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.888664 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 14 07:10:47 crc kubenswrapper[5058]: E1014 07:10:47.888730 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data podName:59f969a6-6fea-40c8-9254-284205f5b3ea nodeName:}" failed. No retries permitted until 2025-10-14 07:10:48.88871242 +0000 UTC m=+1396.799796226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data") pod "rabbitmq-server-0" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea") : configmap "rabbitmq-config-data" not found Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.959364 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.959616 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c888fb598-6htl5" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-log" containerID="cri-o://6445c8e5022085e59b4cf30be5b9f660e7e4d84d81a5dc48c1a2c637c48a4414" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.960035 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c888fb598-6htl5" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-api" containerID="cri-o://3f22606015530d5cd831d677840b6de4c02774150161b7a98b4b65f28323dbc7" gracePeriod=30 Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.991877 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:10:47 crc kubenswrapper[5058]: I1014 07:10:47.996889 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fjrhc"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.010349 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4x26x_fda633cb-5e3d-42da-b21e-be6d5a984f2f/openstack-network-exporter/0.log" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.010420 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4x26x" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.012301 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fjrhc"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.026967 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3d16-account-create-9vjkw"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.058891 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.058948 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3d16-account-create-9vjkw"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.065891 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-202a-account-create-4v5vg"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.074038 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-202a-account-create-4v5vg"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.079670 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.080337 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.093853 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xdsgk"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.115862 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xdsgk"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.133300 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcs8x\" (UniqueName: \"kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.133541 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.133645 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.133702 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.133836 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.134141 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle\") pod \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\" (UID: \"fda633cb-5e3d-42da-b21e-be6d5a984f2f\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.134559 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.134647 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.135048 5058 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovs-rundir\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.135138 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fda633cb-5e3d-42da-b21e-be6d5a984f2f-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.136073 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config" (OuterVolumeSpecName: "config") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.159675 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.159977 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-log" containerID="cri-o://90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.161445 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-api" containerID="cri-o://bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.163030 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x" (OuterVolumeSpecName: "kube-api-access-mcs8x") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). 
InnerVolumeSpecName "kube-api-access-mcs8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.209717 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.231951 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ds9g9"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245751 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245863 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245889 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245921 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7chvv\" (UniqueName: \"kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245948 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.245977 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.246094 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle\") pod \"6faed14d-d25e-43b8-96db-c64b6b3feece\" (UID: \"6faed14d-d25e-43b8-96db-c64b6b3feece\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.246660 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcs8x\" (UniqueName: \"kubernetes.io/projected/fda633cb-5e3d-42da-b21e-be6d5a984f2f-kube-api-access-mcs8x\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.246671 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda633cb-5e3d-42da-b21e-be6d5a984f2f-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.248386 5058 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.252101 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.252156 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run" (OuterVolumeSpecName: "var-run") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.252464 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts" (OuterVolumeSpecName: "scripts") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.278233 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv" (OuterVolumeSpecName: "kube-api-access-7chvv") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "kube-api-access-7chvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: W1014 07:10:48.278533 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc107f0_eaa6_4aa4_b5ea_05bcc76f0409.slice/crio-30ac7af819906fa5ff8bff2ceaf9f49f1bc0e4983d612d2e827bb46d71cd6df0 WatchSource:0}: Error finding container 30ac7af819906fa5ff8bff2ceaf9f49f1bc0e4983d612d2e827bb46d71cd6df0: Status 404 returned error can't find the container with id 30ac7af819906fa5ff8bff2ceaf9f49f1bc0e4983d612d2e827bb46d71cd6df0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.290151 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0800-account-create-k7fx9"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.291040 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.331105 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353055 5058 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353083 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6faed14d-d25e-43b8-96db-c64b6b3feece-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353095 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7chvv\" (UniqueName: \"kubernetes.io/projected/6faed14d-d25e-43b8-96db-c64b6b3feece-kube-api-access-7chvv\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353107 5058 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353119 5058 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6faed14d-d25e-43b8-96db-c64b6b3feece-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.353132 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: E1014 07:10:48.353385 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:48 crc kubenswrapper[5058]: E1014 07:10:48.353506 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data podName:b753342c-4a7e-4bf6-809a-3c5bc083ba6a nodeName:}" failed. No retries permitted until 2025-10-14 07:10:50.353488584 +0000 UTC m=+1398.264572390 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data") pod "rabbitmq-cell1-server-0" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a") : configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.359322 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ds9g9"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.394592 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0800-account-create-k7fx9"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.433072 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qmtsb"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.491644 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74d9700c-c7fa-4020-939c-ce42c1b3afe8/ovsdbserver-nb/0.log" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.491998 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.508590 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.549783 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qmtsb"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.567987 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.569171 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.574901 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9e44-account-create-7nwrj"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.578191 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="galera" containerID="cri-o://9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.578308 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-r4bgw"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.586845 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9e44-account-create-7nwrj"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.592987 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-r4bgw"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.603951 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "fda633cb-5e3d-42da-b21e-be6d5a984f2f" (UID: "fda633cb-5e3d-42da-b21e-be6d5a984f2f"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.617761 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.630109 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d813-account-create-ck5bl"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.639811 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d813-account-create-ck5bl"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.652861 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.653127 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log" containerID="cri-o://13b2f201f21fe9a64c31bfba7e902dfeaa6156cf92a7a9d10dc0f0276012bbec" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.653543 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata" containerID="cri-o://7e5c7b3f56c44a6ade0810b15f6d0c8395be9268c61c135c78ce10faffa09108" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.662141 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "6faed14d-d25e-43b8-96db-c64b6b3feece" (UID: "6faed14d-d25e-43b8-96db-c64b6b3feece"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.663779 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rtt7h"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672299 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jrsz\" (UniqueName: \"kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672350 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672372 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672389 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672513 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672594 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672677 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.672701 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir\") pod \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\" (UID: \"74d9700c-c7fa-4020-939c-ce42c1b3afe8\") " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.673106 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda633cb-5e3d-42da-b21e-be6d5a984f2f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.673122 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6faed14d-d25e-43b8-96db-c64b6b3feece-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.673645 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.673686 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts" (OuterVolumeSpecName: "scripts") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.674419 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rtt7h"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.674446 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config" (OuterVolumeSpecName: "config") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.682603 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.682956 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" containerName="nova-scheduler-scheduler" containerID="cri-o://a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.684055 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.690201 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b71d-account-create-krknh"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.696134 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b71d-account-create-krknh"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.702125 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz" (OuterVolumeSpecName: "kube-api-access-5jrsz") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "kube-api-access-5jrsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.706219 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.714384 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sh9kk"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.722009 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sh9kk"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.729311 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d8bf-account-create-56zf4"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.733357 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.733675 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54cccf7b97-zh6kt" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-httpd" containerID="cri-o://144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.733856 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54cccf7b97-zh6kt" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-server" containerID="cri-o://83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.744245 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d8bf-account-create-56zf4"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.759426 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hcch5"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.772935 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-aaa5-account-create-6v9dn"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.773115 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.774741 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.775197 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.775215 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.775227 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jrsz\" (UniqueName: \"kubernetes.io/projected/74d9700c-c7fa-4020-939c-ce42c1b3afe8-kube-api-access-5jrsz\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.775252 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.775265 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74d9700c-c7fa-4020-939c-ce42c1b3afe8-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.779920 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hcch5"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.786889 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-aaa5-account-create-6v9dn"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.788105 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.809690 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0298834e-f248-4e99-8f2d-7807c9421143" path="/var/lib/kubelet/pods/0298834e-f248-4e99-8f2d-7807c9421143/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.810350 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0599ed33-679e-405e-a658-1abaf1a957aa" path="/var/lib/kubelet/pods/0599ed33-679e-405e-a658-1abaf1a957aa/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.810863 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a76d6d-4463-4f4f-afdc-dde452bd3520" path="/var/lib/kubelet/pods/21a76d6d-4463-4f4f-afdc-dde452bd3520/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.811316 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220927ac-fb79-4a9d-9b29-d1cd3a48cd4f" path="/var/lib/kubelet/pods/220927ac-fb79-4a9d-9b29-d1cd3a48cd4f/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.812870 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29153d57-226a-44ac-ae33-86127aadf258" path="/var/lib/kubelet/pods/29153d57-226a-44ac-ae33-86127aadf258/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.813351 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a95c82c-56a0-469f-bafc-f7beb19a72ba" path="/var/lib/kubelet/pods/2a95c82c-56a0-469f-bafc-f7beb19a72ba/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.813813 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5df098-55c4-4c1a-8872-883e1f9c3929" path="/var/lib/kubelet/pods/2c5df098-55c4-4c1a-8872-883e1f9c3929/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.814254 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30de89dc-6c60-4277-8669-d4441385cc1f" path="/var/lib/kubelet/pods/30de89dc-6c60-4277-8669-d4441385cc1f/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.815171 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "74d9700c-c7fa-4020-939c-ce42c1b3afe8" (UID: "74d9700c-c7fa-4020-939c-ce42c1b3afe8"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.815447 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a02c87-e97c-437f-8653-a2fd34b32b04" path="/var/lib/kubelet/pods/37a02c87-e97c-437f-8653-a2fd34b32b04/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.816011 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e446e94-abcb-495b-be50-2100e77d2e4e" path="/var/lib/kubelet/pods/3e446e94-abcb-495b-be50-2100e77d2e4e/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.816535 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5111efb2-a6ab-4523-a72d-132bf5d965b5" path="/var/lib/kubelet/pods/5111efb2-a6ab-4523-a72d-132bf5d965b5/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.818699 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70436572-037b-45af-8016-a6e7098110ba" path="/var/lib/kubelet/pods/70436572-037b-45af-8016-a6e7098110ba/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.821636 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4206c6-4aa9-49a8-b6fb-c88828d6eb59" path="/var/lib/kubelet/pods/7a4206c6-4aa9-49a8-b6fb-c88828d6eb59/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.822569 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87390a42-eb96-410d-8bca-f77fb2e3ecf9" path="/var/lib/kubelet/pods/87390a42-eb96-410d-8bca-f77fb2e3ecf9/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.823029 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d4c8ac-b4bc-440a-bb9d-3721475b9fae" path="/var/lib/kubelet/pods/87d4c8ac-b4bc-440a-bb9d-3721475b9fae/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.823590 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88700bea-57f8-4d34-a17f-5e5f85792862" path="/var/lib/kubelet/pods/88700bea-57f8-4d34-a17f-5e5f85792862/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.824465 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f4baf6-bf38-46e3-ac23-e4e7e8083f29" path="/var/lib/kubelet/pods/93f4baf6-bf38-46e3-ac23-e4e7e8083f29/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.824936 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab845f3-015e-4520-b4cf-56394290c253" path="/var/lib/kubelet/pods/bab845f3-015e-4520-b4cf-56394290c253/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.825364 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c456dea6-e279-4d34-b077-0404a2b61fe2" path="/var/lib/kubelet/pods/c456dea6-e279-4d34-b077-0404a2b61fe2/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.826296 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf2714b-6496-498c-8027-b41adcda1a41" path="/var/lib/kubelet/pods/daf2714b-6496-498c-8027-b41adcda1a41/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.826890 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc31907-c72f-4c4d-af0a-45e15e5bcca9" path="/var/lib/kubelet/pods/edc31907-c72f-4c4d-af0a-45e15e5bcca9/volumes" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.827990 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.828196 5058 
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.828501 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-76589ffb55-8wb6q" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker" containerID="cri-o://9b3da1831fb76f980056107a8a4364388de6c4f35dd0467a86ca43e04eafed44" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.833384 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.835767 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"]
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.836055 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener-log" containerID="cri-o://9ee6ab347aa76c2029fca9b728228a4e733b2d2b1b8c42b5e8bdb632ac615717" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.836146 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener" containerID="cri-o://3f1df96bec469d5c2518c3516461fee935342a6eb2f6de0d71a00e3cd6a36abf" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.852746 5058 generic.go:334] "Generic (PLEG): container finished" podID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" containerID="ccd63adf9ff2533f48ef01689c4ae84b7ad678fd3261c983558f6e60e740f9b0" exitCode=137
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.852825 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f857ad1b0d365e9ccd06321990a5d512ac6e1af579aac229676780afe20e5f8"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.854562 5058 generic.go:334] "Generic (PLEG): container finished" podID="61117082-2145-4117-9de9-d283039a6d7d" containerID="3dd806326c4ebc88f937e535e0184dd39275bf7daaa1cdd5ccba019323c49240" exitCode=143
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.854652 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerDied","Data":"3dd806326c4ebc88f937e535e0184dd39275bf7daaa1cdd5ccba019323c49240"}
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.855427 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"]
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.855808 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-96d5856f6-5t2b2" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api-log" containerID="cri-o://3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.855860 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-96d5856f6-5t2b2" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api" containerID="cri-o://53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.858087 5058 generic.go:334] "Generic (PLEG): container finished" podID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerID="2182c2e8effad0274964ae00b87d07ae2b2c94468e266f8b6506ca45b077abb0" exitCode=0
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.858196 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" event={"ID":"c560dc37-2c84-4468-800c-d90d8f8c158e","Type":"ContainerDied","Data":"2182c2e8effad0274964ae00b87d07ae2b2c94468e266f8b6506ca45b077abb0"}
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.858226 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" event={"ID":"c560dc37-2c84-4468-800c-d90d8f8c158e","Type":"ContainerDied","Data":"1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5"}
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.858237 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8ca77affb1c53189d5dd01162ae1ce7a88de22e8f47b3d12c6cc48947662d5"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.863484 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4x26x_fda633cb-5e3d-42da-b21e-be6d5a984f2f/openstack-network-exporter/0.log"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.863650 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4x26x"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.865270 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4x26x" event={"ID":"fda633cb-5e3d-42da-b21e-be6d5a984f2f","Type":"ContainerDied","Data":"43874a1d9610423e73e94af8b0fd491dded5237be0e2f524406b4d1b9908adf1"}
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.865333 5058 scope.go:117] "RemoveContainer" containerID="dec221ebaaab946f5f28317e8fedd374c12dcc709014570c2b792f667fb91617"
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.876768 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.876840 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.876852 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d9700c-c7fa-4020-939c-ce42c1b3afe8-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.881080 5058 generic.go:334] "Generic (PLEG): container finished" podID="78ebf4e1-4257-4a45-a776-734a016d955f" containerID="968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a" exitCode=0
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.881150 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerDied","Data":"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a"}
event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerDied","Data":"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.883195 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a5a36ee5-eb19-4390-a07e-3e22a191ad58/ovsdbserver-sb/0.log" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.883256 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a5a36ee5-eb19-4390-a07e-3e22a191ad58","Type":"ContainerDied","Data":"4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.883279 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee8bd737ec846dff3c50189fe21187706e706b23a4049bf76b8bf258b3b9ea4" Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.884340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance3d16-account-delete-zwlpf" event={"ID":"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409","Type":"ContainerStarted","Data":"30ac7af819906fa5ff8bff2ceaf9f49f1bc0e4983d612d2e827bb46d71cd6df0"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.897977 5058 generic.go:334] "Generic (PLEG): container finished" podID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerID="a1c6b82cefdf4862cb066b04eb626d0455c0c4fe043784013c88eec253145cfa" exitCode=143 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.898059 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerDied","Data":"a1c6b82cefdf4862cb066b04eb626d0455c0c4fe043784013c88eec253145cfa"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.899447 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.911937 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vqb94"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.920438 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vqb94"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927114 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="f29643cf82745f7e73e78c4135a5914e8cf5724cd7128107573a6ee8be46e9b0" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927270 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="3f71ae0f3608a3c488d1248099617a68fad24788a23a6eb403537e7c705bc211" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927323 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="33d2eddcb88d038d345ef3bf102f653431e0f85ff697aaaf69ad45de80bb0ebc" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927371 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="509b41a491460621f4e48c030ec3096e244144015e697f031ef0d4a24b2efb95" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927416 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="6e399dec260edbe3841b098d9b9f1bf0dfe804d3a5aa01facc0ec4b2d640b130" exitCode=0 Oct 14 
07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927488 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="343434d65ccc14910a7fcbead2fb88c5672e280565333bc8a1847ece2eded17c" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927540 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="96660315f94d74277f9b50658993503a1b43f9bef9a95f56461f122ccec07ad5" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927586 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="537911dbd262660ec60f6e9b0405d1a9f83ab10683dad7e3074f39dc6c1a11e4" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927630 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="431d79aac23d72bdbe76355998d8a1a48b5be7faa4f9c5fcb39335a05f1f4018" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927674 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="1597a4004ccd7362f11728ae0a39556e442ff02bd29578ce9c6c13e045eb6f2a" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927726 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="54425f7ca236aef845e21698b62205fe14abb46d0ffcc6865557173aab0f934a" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927780 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="a3e3288d52021b1c5dee76e985c4bf2ef23c6520d23e5ef4eb02fe977bc949b4" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927840 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="6d0292a646c2aabd8315d14682a37635c0746d89b568115520d8211b6b5b0c40" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927885 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="06b49fc229b22caf4d176c8d244f32c702aa60ee0c1bc3f0ee68302b65265d19" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.927253 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"f29643cf82745f7e73e78c4135a5914e8cf5724cd7128107573a6ee8be46e9b0"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928036 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"3f71ae0f3608a3c488d1248099617a68fad24788a23a6eb403537e7c705bc211"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928105 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"33d2eddcb88d038d345ef3bf102f653431e0f85ff697aaaf69ad45de80bb0ebc"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928159 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928217 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
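swift-storage-0 runs many containers, so the kubelet first records one "container finished" line per container and then relays a matching ContainerDied PLEG event for each, as the entries before and after this point show. A hedged sketch for tallying died containers per pod from an excerpt like this one (the regex mirrors the visible format; the helper name is ours):

import re
from collections import Counter

DIED_RE = re.compile(
    r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{[^}]*"Type":"ContainerDied"'
)

def died_per_pod(journal_text: str) -> Counter:
    # Map pod name -> number of ContainerDied events seen in the excerpt.
    return Counter(DIED_RE.findall(journal_text))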
event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"509b41a491460621f4e48c030ec3096e244144015e697f031ef0d4a24b2efb95"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928268 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"6e399dec260edbe3841b098d9b9f1bf0dfe804d3a5aa01facc0ec4b2d640b130"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928319 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"343434d65ccc14910a7fcbead2fb88c5672e280565333bc8a1847ece2eded17c"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928370 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"96660315f94d74277f9b50658993503a1b43f9bef9a95f56461f122ccec07ad5"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"537911dbd262660ec60f6e9b0405d1a9f83ab10683dad7e3074f39dc6c1a11e4"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928479 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"431d79aac23d72bdbe76355998d8a1a48b5be7faa4f9c5fcb39335a05f1f4018"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928529 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"1597a4004ccd7362f11728ae0a39556e442ff02bd29578ce9c6c13e045eb6f2a"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928581 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"54425f7ca236aef845e21698b62205fe14abb46d0ffcc6865557173aab0f934a"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928631 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"a3e3288d52021b1c5dee76e985c4bf2ef23c6520d23e5ef4eb02fe977bc949b4"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928755 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"6d0292a646c2aabd8315d14682a37635c0746d89b568115520d8211b6b5b0c40"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"06b49fc229b22caf4d176c8d244f32c702aa60ee0c1bc3f0ee68302b65265d19"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.928774 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerName="nova-cell0-conductor-conductor" containerID="cri-o://1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: 
I1014 07:10:48.940849 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.948688 5058 generic.go:334] "Generic (PLEG): container finished" podID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" exitCode=0 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.952614 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerDied","Data":"3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3"} Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.952628 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="rabbitmq" containerID="cri-o://c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4" gracePeriod=604800 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.965197 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qk69x"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.965258 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.965465 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" gracePeriod=30 Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.980478 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qk69x"] Oct 14 07:10:48 crc kubenswrapper[5058]: E1014 07:10:48.980509 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 14 07:10:48 crc kubenswrapper[5058]: E1014 07:10:48.980580 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data podName:59f969a6-6fea-40c8-9254-284205f5b3ea nodeName:}" failed. No retries permitted until 2025-10-14 07:10:50.980561208 +0000 UTC m=+1398.891645014 (durationBeforeRetry 2s). 
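The two error entries above show a teardown race: rabbitmq-server-0 is being deleted while its config-data volume is still being reconciled, and the backing configmap is already gone, so MountVolume.SetUp fails and the operation is re-queued with backoff ("durationBeforeRetry 2s"; the m=+1398.89… field is monotonic seconds since the kubelet started). A minimal sketch of the retry arithmetic, assuming the kubelet's usual doubling backoff; the log confirms only the single 2s step, so the base and cap below are assumptions, not confirmed values:

import itertools

def backoff_delays(base: float = 0.5, factor: float = 2.0, cap: float = 122.0):
    # Yield capped exponential retry delays: 0.5, 1, 2, 4, ... up to the cap.
    d = base
    while True:
        yield min(d, cap)
        d *= factor

print(list(itertools.islice(backoff_delays(), 6)))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]

Note also gracePeriod=604800 on the rabbitmq containers here: that is 7 days (604800 s), versus the 30 s used for the other containers in this section.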
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.986216 5058 generic.go:334] "Generic (PLEG): container finished" podID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerID="6445c8e5022085e59b4cf30be5b9f660e7e4d84d81a5dc48c1a2c637c48a4414" exitCode=143
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.987970 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerDied","Data":"6445c8e5022085e59b4cf30be5b9f660e7e4d84d81a5dc48c1a2c637c48a4414"}
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.993637 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.993919 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://524cb89b86eddc16eadfdb3dea364ca4de5b74f0c46bcb1045cef2074f40a15e" gracePeriod=30
Oct 14 07:10:48 crc kubenswrapper[5058]: I1014 07:10:48.996152 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="rabbitmq" containerID="cri-o://006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d" gracePeriod=604800
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.033489 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-54cccf7b97-zh6kt" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": dial tcp 10.217.0.169:8080: connect: connection refused"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.034874 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-54cccf7b97-zh6kt" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.169:8080/healthcheck\": dial tcp 10.217.0.169:8080: connect: connection refused"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.042008 5058 generic.go:334] "Generic (PLEG): container finished" podID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerID="13b2f201f21fe9a64c31bfba7e902dfeaa6156cf92a7a9d10dc0f0276012bbec" exitCode=143
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.042406 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerDied","Data":"13b2f201f21fe9a64c31bfba7e902dfeaa6156cf92a7a9d10dc0f0276012bbec"}
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.054858 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.055711 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a5a36ee5-eb19-4390-a07e-3e22a191ad58/ovsdbserver-sb/0.log"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.055783 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.061653 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4x26x"]
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.064826 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4gftc" event={"ID":"6faed14d-d25e-43b8-96db-c64b6b3feece","Type":"ContainerDied","Data":"7d7a6b93b1b7c173d5b833f758f9253e768cb913ec9d5db1349828e1f593605f"}
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.065010 5058 scope.go:117] "RemoveContainer" containerID="3ed36db0ed54ce5c7569a1e901ed5f51357aac4b7b3444aec8115e75aced97c2"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.065177 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4gftc"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.066330 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-4x26x"]
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.081920 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65bf758599-p6h9m"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.082245 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerID="90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b" exitCode=143
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.082277 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerDied","Data":"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b"}
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.091344 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0800-account-delete-rglnm" event={"ID":"9a92fd66-33d0-4026-9f89-4fe1b7a57478","Type":"ContainerStarted","Data":"d73a4d2f198c70919a3502f6e9b9e5e60571d661c80f01e1798772c64bfd9a59"}
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.117013 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8d77dbf7-m7vgh" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.123218 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_74d9700c-c7fa-4020-939c-ce42c1b3afe8/ovsdbserver-nb/0.log"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.123311 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"74d9700c-c7fa-4020-939c-ce42c1b3afe8","Type":"ContainerDied","Data":"cfbb8fc020bd5c7014360f2430cf5bf213f4b3a02fb43797a28fd4ed195bae5a"}
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.123408 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.151974 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.155667 5058 generic.go:334] "Generic (PLEG): container finished" podID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerID="75cd20c0073237cb2ae10b086a89b60f8da343d852dbb0faa5aaa253f9871fe4" exitCode=143 Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.155714 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerDied","Data":"75cd20c0073237cb2ae10b086a89b60f8da343d852dbb0faa5aaa253f9871fe4"} Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.172915 5058 scope.go:117] "RemoveContainer" containerID="49e8270533c6a8d714a36b9d0639d16b7a4f6bbe36fd6e76e2691778fef147d6" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.173055 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4gftc"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.179237 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4gftc"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184226 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184266 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67h8x\" (UniqueName: \"kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184329 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184369 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184383 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184425 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle\") pod \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184456 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184491 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config\") pod \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184516 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg8bl\" (UniqueName: \"kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184532 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184553 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184596 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184624 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184644 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184684 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret\") pod \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184713 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs\") pod \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\" (UID: \"a5a36ee5-eb19-4390-a07e-3e22a191ad58\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184737 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-mtmgt\" (UniqueName: \"kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt\") pod \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\" (UID: \"95fc1e78-622a-4acc-865a-0f9e95bac6b3\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.184756 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.186413 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.193683 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.193726 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.194626 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x" (OuterVolumeSpecName: "kube-api-access-67h8x") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "kube-api-access-67h8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.195763 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts" (OuterVolumeSpecName: "scripts") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.196625 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config" (OuterVolumeSpecName: "config") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.197124 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.232575 5058 scope.go:117] "RemoveContainer" containerID="ddd9d9608be750d69855a085c54444295e513581feb904e658f9daaf62f4edf6" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.233141 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt" (OuterVolumeSpecName: "kube-api-access-mtmgt") pod "95fc1e78-622a-4acc-865a-0f9e95bac6b3" (UID: "95fc1e78-622a-4acc-865a-0f9e95bac6b3"). InnerVolumeSpecName "kube-api-access-mtmgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.235710 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl" (OuterVolumeSpecName: "kube-api-access-lg8bl") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "kube-api-access-lg8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.247474 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.265844 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.269132 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95fc1e78-622a-4acc-865a-0f9e95bac6b3" (UID: "95fc1e78-622a-4acc-865a-0f9e95bac6b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288270 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67h8x\" (UniqueName: \"kubernetes.io/projected/c560dc37-2c84-4468-800c-d90d8f8c158e-kube-api-access-67h8x\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288306 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288315 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288323 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288332 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg8bl\" (UniqueName: \"kubernetes.io/projected/a5a36ee5-eb19-4390-a07e-3e22a191ad58-kube-api-access-lg8bl\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288340 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288369 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288378 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a36ee5-eb19-4390-a07e-3e22a191ad58-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.288388 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtmgt\" (UniqueName: \"kubernetes.io/projected/95fc1e78-622a-4acc-865a-0f9e95bac6b3-kube-api-access-mtmgt\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.331031 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.350922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "95fc1e78-622a-4acc-865a-0f9e95bac6b3" (UID: "95fc1e78-622a-4acc-865a-0f9e95bac6b3"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.368779 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config" (OuterVolumeSpecName: "config") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.389569 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.389599 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.389609 5058 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.410171 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.439586 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.453289 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.480321 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.484542 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "95fc1e78-622a-4acc-865a-0f9e95bac6b3" (UID: "95fc1e78-622a-4acc-865a-0f9e95bac6b3"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.483390 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.499270 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") pod \"c560dc37-2c84-4468-800c-d90d8f8c158e\" (UID: \"c560dc37-2c84-4468-800c-d90d8f8c158e\") " Oct 14 07:10:49 crc kubenswrapper[5058]: W1014 07:10:49.499421 5058 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c560dc37-2c84-4468-800c-d90d8f8c158e/volumes/kubernetes.io~configmap/ovsdbserver-sb Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.499449 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c560dc37-2c84-4468-800c-d90d8f8c158e" (UID: "c560dc37-2c84-4468-800c-d90d8f8c158e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502146 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502253 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502329 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95fc1e78-622a-4acc-865a-0f9e95bac6b3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502385 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502452 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.502506 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c560dc37-2c84-4468-800c-d90d8f8c158e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.587684 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a5a36ee5-eb19-4390-a07e-3e22a191ad58" (UID: "a5a36ee5-eb19-4390-a07e-3e22a191ad58"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.604311 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a36ee5-eb19-4390-a07e-3e22a191ad58-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.785485 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.787162 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.787683 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.787715 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.791045 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.792422 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.797335 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:49 crc kubenswrapper[5058]: E1014 07:10:49.797404 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.927749 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.944126 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.944414 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-central-agent" containerID="cri-o://133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80" gracePeriod=30
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.944576 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="proxy-httpd" containerID="cri-o://45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190" gracePeriod=30
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.944626 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="sg-core" containerID="cri-o://6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c" gracePeriod=30
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.944706 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-notification-agent" containerID="cri-o://13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09" gracePeriod=30
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.996805 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 07:10:49 crc kubenswrapper[5058]: I1014 07:10:49.996999 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" containerName="kube-state-metrics" containerID="cri-o://76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230" gracePeriod=30
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.005183 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.009572 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54cccf7b97-zh6kt"
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011518 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011605 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011627 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011658 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011787 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.011856 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgkm5\" (UniqueName: \"kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5\") pod \"78ebf4e1-4257-4a45-a776-734a016d955f\" (UID: \"78ebf4e1-4257-4a45-a776-734a016d955f\") "
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.019858 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.024840 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts" (OuterVolumeSpecName: "scripts") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.039402 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5" (OuterVolumeSpecName: "kube-api-access-mgkm5") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "kube-api-access-mgkm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.047924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114065 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114109 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114128 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114177 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114201 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114220 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114236 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114251 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114274 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config\") pod 
\"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114339 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114358 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114374 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114403 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114424 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wb5c\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114441 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxtv5\" (UniqueName: \"kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114474 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle\") pod \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\" (UID: \"0820570e-22cb-4f0b-9ee4-f5237ccdfdff\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.114497 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs\") pod \"133d4cdf-58ed-4544-8f05-328587a2b701\" (UID: \"133d4cdf-58ed-4544-8f05-328587a2b701\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.129543 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.129856 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.131426 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.131998 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140383 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140788 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/78ebf4e1-4257-4a45-a776-734a016d955f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140822 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140831 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140840 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140860 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140869 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.140878 5058 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mgkm5\" (UniqueName: \"kubernetes.io/projected/78ebf4e1-4257-4a45-a776-734a016d955f-kube-api-access-mgkm5\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.141497 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.149938 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets" (OuterVolumeSpecName: "secrets") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.217116 5058 generic.go:334] "Generic (PLEG): container finished" podID="1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" containerID="8d666c7d69d97f26522cf42522e4b969f3cf0766d22881eccdc50dade395b4d4" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.217640 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance3d16-account-delete-zwlpf" event={"ID":"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409","Type":"ContainerDied","Data":"8d666c7d69d97f26522cf42522e4b969f3cf0766d22881eccdc50dade395b4d4"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.233343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.233621 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c" (OuterVolumeSpecName: "kube-api-access-9wb5c") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "kube-api-access-9wb5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.235246 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5" (OuterVolumeSpecName: "kube-api-access-fxtv5") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "kube-api-access-fxtv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.238127 5058 generic.go:334] "Generic (PLEG): container finished" podID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerID="9ee6ab347aa76c2029fca9b728228a4e733b2d2b1b8c42b5e8bdb632ac615717" exitCode=143 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.238260 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerDied","Data":"9ee6ab347aa76c2029fca9b728228a4e733b2d2b1b8c42b5e8bdb632ac615717"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.241160 5058 generic.go:334] "Generic (PLEG): container finished" podID="9a92fd66-33d0-4026-9f89-4fe1b7a57478" containerID="ab9697d928a356b03948eeb876bb51f7470c3eecb5eb1ff8a3162b50c9956aed" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.241253 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0800-account-delete-rglnm" event={"ID":"9a92fd66-33d0-4026-9f89-4fe1b7a57478","Type":"ContainerDied","Data":"ab9697d928a356b03948eeb876bb51f7470c3eecb5eb1ff8a3162b50c9956aed"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242466 5058 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242480 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wb5c\" (UniqueName: \"kubernetes.io/projected/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-kube-api-access-9wb5c\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242489 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxtv5\" (UniqueName: \"kubernetes.io/projected/133d4cdf-58ed-4544-8f05-328587a2b701-kube-api-access-fxtv5\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242498 5058 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242506 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/133d4cdf-58ed-4544-8f05-328587a2b701-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242513 5058 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.242521 5058 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/133d4cdf-58ed-4544-8f05-328587a2b701-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.290869 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.291098 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="c2af2c14-fb00-45d0-8414-8754189455a0" containerName="memcached" 
containerID="cri-o://009dfda92aa7d854c85932f7c1982d950ec47b6a6ac6fe07c1206850d5895921" gracePeriod=30 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.292047 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.311322 5058 generic.go:334] "Generic (PLEG): container finished" podID="133d4cdf-58ed-4544-8f05-328587a2b701" containerID="9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.311727 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerDied","Data":"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.311762 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"133d4cdf-58ed-4544-8f05-328587a2b701","Type":"ContainerDied","Data":"0cc5f9227cc192572877a2f92d8eda5a6e4efbb776baba69c6b8b5deecca2074"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.311781 5058 scope.go:117] "RemoveContainer" containerID="9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.311950 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.356335 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.356906 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.356956 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data podName:b753342c-4a7e-4bf6-809a-3c5bc083ba6a nodeName:}" failed. No retries permitted until 2025-10-14 07:10:54.356940804 +0000 UTC m=+1402.268024610 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data") pod "rabbitmq-cell1-server-0" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a") : configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.361420 5058 generic.go:334] "Generic (PLEG): container finished" podID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerID="3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c" exitCode=143 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.361503 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerDied","Data":"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.395269 5058 generic.go:334] "Generic (PLEG): container finished" podID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerID="83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.395331 5058 generic.go:334] "Generic (PLEG): container finished" podID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerID="144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.395467 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54cccf7b97-zh6kt" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.396221 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerDied","Data":"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.396251 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerDied","Data":"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.396261 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54cccf7b97-zh6kt" event={"ID":"0820570e-22cb-4f0b-9ee4-f5237ccdfdff","Type":"ContainerDied","Data":"d7b00d866f7d27abab215d0c3e4446f03c69f89376adc0e46b0389a7ca95b08f"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.417130 5058 scope.go:117] "RemoveContainer" containerID="63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.435503 5058 generic.go:334] "Generic (PLEG): container finished" podID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" containerID="524cb89b86eddc16eadfdb3dea364ca4de5b74f0c46bcb1045cef2074f40a15e" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.435590 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6771f490-1ad7-4f61-98a0-d52df29ef7c8","Type":"ContainerDied","Data":"524cb89b86eddc16eadfdb3dea364ca4de5b74f0c46bcb1045cef2074f40a15e"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.444339 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2nnd9"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.445118 5058 generic.go:334] "Generic (PLEG): container finished" podID="df614c68-4293-4c6c-a18e-e87dfcf8fe26" 
containerID="2c928056e43e35e805ee93793ad01965db7b4f5371b64ad8b2405baadb1b7cb2" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.445167 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanb71d-account-delete-xt45s" event={"ID":"df614c68-4293-4c6c-a18e-e87dfcf8fe26","Type":"ContainerDied","Data":"2c928056e43e35e805ee93793ad01965db7b4f5371b64ad8b2405baadb1b7cb2"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.445187 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanb71d-account-delete-xt45s" event={"ID":"df614c68-4293-4c6c-a18e-e87dfcf8fe26","Type":"ContainerStarted","Data":"54471da1bb3ebfe2932f3acd67b2d8d737ee6b8f0ccd91ff487b0ffa0ae23fbb"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.449198 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.452194 5058 generic.go:334] "Generic (PLEG): container finished" podID="78ebf4e1-4257-4a45-a776-734a016d955f" containerID="3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf" exitCode=0 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.452262 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerDied","Data":"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.452287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"78ebf4e1-4257-4a45-a776-734a016d955f","Type":"ContainerDied","Data":"9398d9b7ed17d8f16f3f4facee3c7ca860ebac6ecedac96fea5e4f9c7afe65c3"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.452328 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.469009 5058 generic.go:334] "Generic (PLEG): container finished" podID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerID="6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c" exitCode=2 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.469086 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerDied","Data":"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.475239 5058 generic.go:334] "Generic (PLEG): container finished" podID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerID="d8be8bca483382edada5af0684b9c15ed30003719e5ed4f96a64a29209b1930d" exitCode=143 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.475312 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerDied","Data":"d8be8bca483382edada5af0684b9c15ed30003719e5ed4f96a64a29209b1930d"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.476586 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron9e44-account-delete-z6nhv" event={"ID":"b8fe9fcd-194a-45e1-b1c3-08bdd5331810","Type":"ContainerStarted","Data":"1e22a6de805c0f1070636425bfdece80bed08a0c65a065675019a6bcbc3a83bc"} Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.476770 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron9e44-account-delete-z6nhv" podUID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" containerName="mariadb-account-delete" containerID="cri-o://ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322" gracePeriod=30 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.479166 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.479253 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.479500 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65bf758599-p6h9m" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.479946 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.483563 5058 scope.go:117] "RemoveContainer" containerID="9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.487751 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0\": container with ID starting with 9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0 not found: ID does not exist" containerID="9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.487787 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0"} err="failed to get container status \"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0\": rpc error: code = NotFound desc = could not find container \"9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0\": container with ID starting with 9de9ca0ba1cba663c687ee278c6adb5943b0bd1e8a4440535b54d9dc747171b0 not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.487850 5058 scope.go:117] "RemoveContainer" containerID="63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.490228 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c\": container with ID starting with 63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c not found: ID does not exist" containerID="63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.490253 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c"} err="failed to get container status \"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c\": rpc error: code = NotFound desc = could not find container \"63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c\": container with ID starting with 63ef4bb01b3d8358055d822b7317de0f1c923c06c93b7b299710ea105da4cf8c not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.490268 5058 scope.go:117] "RemoveContainer" containerID="83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.494919 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.551168 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dfvvs"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.551223 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2nnd9"] Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.557196 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.557845 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.559238 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.560380 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.560427 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.561049 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs\") pod \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.561186 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nhqk\" (UniqueName: \"kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk\") pod \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.561311 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data\") pod \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.561339 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle\") pod \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.561436 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs\") pod \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\" (UID: \"6771f490-1ad7-4f61-98a0-d52df29ef7c8\") " 
Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.562212 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.562257 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.562270 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.587943 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.593430 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "133d4cdf-58ed-4544-8f05-328587a2b701" (UID: "133d4cdf-58ed-4544-8f05-328587a2b701"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.593510 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.595436 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.607359 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk" (OuterVolumeSpecName: "kube-api-access-8nhqk") pod "6771f490-1ad7-4f61-98a0-d52df29ef7c8" (UID: "6771f490-1ad7-4f61-98a0-d52df29ef7c8"). InnerVolumeSpecName "kube-api-access-8nhqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.607636 5058 scope.go:117] "RemoveContainer" containerID="144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.618913 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data" (OuterVolumeSpecName: "config-data") pod "0820570e-22cb-4f0b-9ee4-f5237ccdfdff" (UID: "0820570e-22cb-4f0b-9ee4-f5237ccdfdff"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.622051 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data" (OuterVolumeSpecName: "config-data") pod "6771f490-1ad7-4f61-98a0-d52df29ef7c8" (UID: "6771f490-1ad7-4f61-98a0-d52df29ef7c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.623367 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dfvvs"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.624204 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data" (OuterVolumeSpecName: "config-data") pod "78ebf4e1-4257-4a45-a776-734a016d955f" (UID: "78ebf4e1-4257-4a45-a776-734a016d955f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.636955 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.637464 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6db45f8b6f-vkgmc" podUID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" containerName="keystone-api" containerID="cri-o://6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a" gracePeriod=30 Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.669542 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673732 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673768 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673781 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebf4e1-4257-4a45-a776-734a016d955f-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673790 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673811 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nhqk\" (UniqueName: \"kubernetes.io/projected/6771f490-1ad7-4f61-98a0-d52df29ef7c8-kube-api-access-8nhqk\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673821 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0820570e-22cb-4f0b-9ee4-f5237ccdfdff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673829 5058 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/133d4cdf-58ed-4544-8f05-328587a2b701-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.673838 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.694961 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6771f490-1ad7-4f61-98a0-d52df29ef7c8" (UID: "6771f490-1ad7-4f61-98a0-d52df29ef7c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.699931 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jcflc"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.703668 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "6771f490-1ad7-4f61-98a0-d52df29ef7c8" (UID: "6771f490-1ad7-4f61-98a0-d52df29ef7c8"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.708922 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jcflc"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.711730 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fcaa-account-create-hfmrf"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.716083 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fcaa-account-create-hfmrf"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.728520 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron9e44-account-delete-z6nhv" podStartSLOduration=4.728504208 podStartE2EDuration="4.728504208s" podCreationTimestamp="2025-10-14 07:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 07:10:50.566371513 +0000 UTC m=+1398.477455319" watchObservedRunningTime="2025-10-14 07:10:50.728504208 +0000 UTC m=+1398.639588014" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.734437 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "6771f490-1ad7-4f61-98a0-d52df29ef7c8" (UID: "6771f490-1ad7-4f61-98a0-d52df29ef7c8"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.740711 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.745227 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65bf758599-p6h9m"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.748574 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.753057 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.777577 5058 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.777816 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.777850 5058 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6771f490-1ad7-4f61-98a0-d52df29ef7c8-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.805560 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15377fc0-cfba-460c-b865-e8fb66fea612" path="/var/lib/kubelet/pods/15377fc0-cfba-460c-b865-e8fb66fea612/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.806084 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f239f4-7737-4333-914b-8586ffd51201" path="/var/lib/kubelet/pods/50f239f4-7737-4333-914b-8586ffd51201/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.806560 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61eee2da-b885-4d29-a8c0-3ea27e89d53f" path="/var/lib/kubelet/pods/61eee2da-b885-4d29-a8c0-3ea27e89d53f/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.807070 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" path="/var/lib/kubelet/pods/6faed14d-d25e-43b8-96db-c64b6b3feece/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.808264 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" path="/var/lib/kubelet/pods/74d9700c-c7fa-4020-939c-ce42c1b3afe8/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.809044 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b09a00-f000-4bcc-ab28-07699c6cbeb5" path="/var/lib/kubelet/pods/75b09a00-f000-4bcc-ab28-07699c6cbeb5/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.809513 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff99534-a94e-4306-ae3b-2381a0b8d029" path="/var/lib/kubelet/pods/7ff99534-a94e-4306-ae3b-2381a0b8d029/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.810037 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fc1e78-622a-4acc-865a-0f9e95bac6b3" path="/var/lib/kubelet/pods/95fc1e78-622a-4acc-865a-0f9e95bac6b3/volumes" Oct 14 07:10:50 crc 
kubenswrapper[5058]: I1014 07:10:50.810964 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c90a327-1079-4375-986f-7becd72aaaad" path="/var/lib/kubelet/pods/9c90a327-1079-4375-986f-7becd72aaaad/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.811542 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" path="/var/lib/kubelet/pods/a5a36ee5-eb19-4390-a07e-3e22a191ad58/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.812196 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" path="/var/lib/kubelet/pods/c560dc37-2c84-4468-800c-d90d8f8c158e/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.815720 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" path="/var/lib/kubelet/pods/fda633cb-5e3d-42da-b21e-be6d5a984f2f/volumes" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.832402 5058 scope.go:117] "RemoveContainer" containerID="83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.833942 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a\": container with ID starting with 83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a not found: ID does not exist" containerID="83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.834001 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a"} err="failed to get container status \"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a\": rpc error: code = NotFound desc = could not find container \"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a\": container with ID starting with 83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.834050 5058 scope.go:117] "RemoveContainer" containerID="144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.835658 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b\": container with ID starting with 144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b not found: ID does not exist" containerID="144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.835788 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b"} err="failed to get container status \"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b\": rpc error: code = NotFound desc = could not find container \"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b\": container with ID starting with 144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.835859 5058 scope.go:117] "RemoveContainer" 
containerID="83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.844121 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a"} err="failed to get container status \"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a\": rpc error: code = NotFound desc = could not find container \"83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a\": container with ID starting with 83108be88e5c84fbbc7e01b0fc1a95ef36dc8ae42811e70cf5564b5c8df77c0a not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.844153 5058 scope.go:117] "RemoveContainer" containerID="144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.855785 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.856403 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b"} err="failed to get container status \"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b\": rpc error: code = NotFound desc = could not find container \"144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b\": container with ID starting with 144a5e21715feac23a8e229916aae7487094b0921983f7ffb88895eb51aca89b not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.856437 5058 scope.go:117] "RemoveContainer" containerID="968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.879008 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-54cccf7b97-zh6kt"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.892562 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.898919 5058 scope.go:117] "RemoveContainer" containerID="3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.903178 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.911141 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.914511 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.940748 5058 scope.go:117] "RemoveContainer" containerID="968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.941211 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a\": container with ID starting with 968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a not found: ID does not exist" containerID="968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.941240 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a"} err="failed to get container status \"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a\": rpc error: code = NotFound desc = could not find container \"968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a\": container with ID starting with 968ac6b011947ec29049d7bcc9e2e505875ae1fc9f50bd7f84622b2ba9b4075a not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.941265 5058 scope.go:117] "RemoveContainer" containerID="3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.941583 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf\": container with ID starting with 3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf not found: ID does not exist" containerID="3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.941602 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf"} err="failed to get container status \"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf\": rpc error: code = NotFound desc = could not find container \"3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf\": container with ID starting with 3b9e35a31912ff2046f03a0954cc6993fa5b7403da50239de5442e4a1d812ecf not found: ID does not exist" Oct 14 07:10:50 crc kubenswrapper[5058]: I1014 07:10:50.976529 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.982178 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 14 07:10:50 crc kubenswrapper[5058]: E1014 07:10:50.982251 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data podName:59f969a6-6fea-40c8-9254-284205f5b3ea nodeName:}" failed. No retries permitted until 2025-10-14 07:10:54.982233347 +0000 UTC m=+1402.893317153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data") pod "rabbitmq-server-0" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea") : configmap "rabbitmq-config-data" not found Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.037883 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="galera" containerID="cri-o://f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2" gracePeriod=30 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.075030 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.075848 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.083904 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jd7h\" (UniqueName: \"kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h\") pod \"9a92fd66-33d0-4026-9f89-4fe1b7a57478\" (UID: \"9a92fd66-33d0-4026-9f89-4fe1b7a57478\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.101354 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h" (OuterVolumeSpecName: "kube-api-access-5jd7h") pod "9a92fd66-33d0-4026-9f89-4fe1b7a57478" (UID: "9a92fd66-33d0-4026-9f89-4fe1b7a57478"). InnerVolumeSpecName "kube-api-access-5jd7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.156008 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.184017 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:58432->10.217.0.164:8776: read: connection reset by peer" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.185981 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") pod \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\" (UID: \"b8fe9fcd-194a-45e1-b1c3-08bdd5331810\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.186052 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9795\" (UniqueName: \"kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795\") pod \"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409\" (UID: \"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.186149 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcl98\" (UniqueName: \"kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98\") pod \"df614c68-4293-4c6c-a18e-e87dfcf8fe26\" (UID: \"df614c68-4293-4c6c-a18e-e87dfcf8fe26\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.186636 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jd7h\" (UniqueName: \"kubernetes.io/projected/9a92fd66-33d0-4026-9f89-4fe1b7a57478-kube-api-access-5jd7h\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.190522 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795" (OuterVolumeSpecName: "kube-api-access-f9795") pod "1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" (UID: "1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409"). InnerVolumeSpecName "kube-api-access-f9795". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.191032 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98" (OuterVolumeSpecName: "kube-api-access-kcl98") pod "df614c68-4293-4c6c-a18e-e87dfcf8fe26" (UID: "df614c68-4293-4c6c-a18e-e87dfcf8fe26"). InnerVolumeSpecName "kube-api-access-kcl98". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.192898 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m" (OuterVolumeSpecName: "kube-api-access-49q6m") pod "b8fe9fcd-194a-45e1-b1c3-08bdd5331810" (UID: "b8fe9fcd-194a-45e1-b1c3-08bdd5331810"). InnerVolumeSpecName "kube-api-access-49q6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.287638 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49q6m\" (UniqueName: \"kubernetes.io/projected/b8fe9fcd-194a-45e1-b1c3-08bdd5331810-kube-api-access-49q6m\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.287679 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9795\" (UniqueName: \"kubernetes.io/projected/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409-kube-api-access-f9795\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.287691 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcl98\" (UniqueName: \"kubernetes.io/projected/df614c68-4293-4c6c-a18e-e87dfcf8fe26-kube-api-access-kcl98\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.320487 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 is running failed: container process not found" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.320776 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 is running failed: container process not found" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.321098 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 is running failed: container process not found" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.321127 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 is running failed: container process not found" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" containerName="nova-scheduler-scheduler" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.322620 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.404318 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.434390 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5c888fb598-6htl5" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.155:8778/\": read tcp 10.217.0.2:41064->10.217.0.155:8778: read: connection reset by peer" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.434704 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5c888fb598-6htl5" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.155:8778/\": read tcp 10.217.0.2:41070->10.217.0.155:8778: read: connection reset by peer" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.493445 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxdjk\" (UniqueName: \"kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk\") pod \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.493569 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle\") pod \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.493602 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config\") pod \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.493689 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs\") pod \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\" (UID: \"67eb8142-3269-43d9-a2bc-a6afbaa991f9\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.500759 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk" (OuterVolumeSpecName: "kube-api-access-mxdjk") pod "67eb8142-3269-43d9-a2bc-a6afbaa991f9" (UID: "67eb8142-3269-43d9-a2bc-a6afbaa991f9"). InnerVolumeSpecName "kube-api-access-mxdjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.507427 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0800-account-delete-rglnm" event={"ID":"9a92fd66-33d0-4026-9f89-4fe1b7a57478","Type":"ContainerDied","Data":"d73a4d2f198c70919a3502f6e9b9e5e60571d661c80f01e1798772c64bfd9a59"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.507472 5058 scope.go:117] "RemoveContainer" containerID="ab9697d928a356b03948eeb876bb51f7470c3eecb5eb1ff8a3162b50c9956aed" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.507563 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder0800-account-delete-rglnm" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.515455 5058 generic.go:334] "Generic (PLEG): container finished" podID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerID="f5d187c11c0f467f10e4343a5baa9bc8683d7c0ce7601c7a7a218b64e989e1ed" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.515513 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerDied","Data":"f5d187c11c0f467f10e4343a5baa9bc8683d7c0ce7601c7a7a218b64e989e1ed"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.517705 5058 generic.go:334] "Generic (PLEG): container finished" podID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerID="3f22606015530d5cd831d677840b6de4c02774150161b7a98b4b65f28323dbc7" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.517748 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerDied","Data":"3f22606015530d5cd831d677840b6de4c02774150161b7a98b4b65f28323dbc7"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.531601 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6771f490-1ad7-4f61-98a0-d52df29ef7c8","Type":"ContainerDied","Data":"620390d1f5623b444165eee47ddd7cc564d852ac532074bc916c9f920e6b64af"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.531663 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.532085 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "67eb8142-3269-43d9-a2bc-a6afbaa991f9" (UID: "67eb8142-3269-43d9-a2bc-a6afbaa991f9"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.544824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerDied","Data":"ddf55f3c2bbe35fdc2ab51be906a947fa2e2fdff71a5c9edee6623ab01efa4ca"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.544787 5058 generic.go:334] "Generic (PLEG): container finished" podID="61117082-2145-4117-9de9-d283039a6d7d" containerID="ddf55f3c2bbe35fdc2ab51be906a947fa2e2fdff71a5c9edee6623ab01efa4ca" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.558009 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.563332 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.567153 5058 scope.go:117] "RemoveContainer" containerID="524cb89b86eddc16eadfdb3dea364ca4de5b74f0c46bcb1045cef2074f40a15e" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.567529 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "67eb8142-3269-43d9-a2bc-a6afbaa991f9" (UID: "67eb8142-3269-43d9-a2bc-a6afbaa991f9"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.570466 5058 generic.go:334] "Generic (PLEG): container finished" podID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerID="45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.570546 5058 generic.go:334] "Generic (PLEG): container finished" podID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerID="133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.570524 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerDied","Data":"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.570617 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerDied","Data":"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.580132 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.583977 5058 generic.go:334] "Generic (PLEG): container finished" podID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" containerID="ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.584034 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron9e44-account-delete-z6nhv" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.584055 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron9e44-account-delete-z6nhv" event={"ID":"b8fe9fcd-194a-45e1-b1c3-08bdd5331810","Type":"ContainerDied","Data":"1e22a6de805c0f1070636425bfdece80bed08a0c65a065675019a6bcbc3a83bc"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.584079 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron9e44-account-delete-z6nhv" event={"ID":"b8fe9fcd-194a-45e1-b1c3-08bdd5331810","Type":"ContainerDied","Data":"ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.587428 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder0800-account-delete-rglnm"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.593537 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanb71d-account-delete-xt45s" event={"ID":"df614c68-4293-4c6c-a18e-e87dfcf8fe26","Type":"ContainerDied","Data":"54471da1bb3ebfe2932f3acd67b2d8d737ee6b8f0ccd91ff487b0ffa0ae23fbb"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.593628 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanb71d-account-delete-xt45s" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596186 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnjv4\" (UniqueName: \"kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4\") pod \"18689556-e39f-4c5d-add8-aa934c468f1b\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596282 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle\") pod \"18689556-e39f-4c5d-add8-aa934c468f1b\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596488 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data\") pod \"18689556-e39f-4c5d-add8-aa934c468f1b\" (UID: \"18689556-e39f-4c5d-add8-aa934c468f1b\") " Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596897 5058 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596924 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxdjk\" (UniqueName: \"kubernetes.io/projected/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-api-access-mxdjk\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.596937 5058 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.598975 5058 generic.go:334] "Generic (PLEG): container finished" podID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" 
containerID="76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230" exitCode=2 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.599044 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.599073 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67eb8142-3269-43d9-a2bc-a6afbaa991f9","Type":"ContainerDied","Data":"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.599100 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67eb8142-3269-43d9-a2bc-a6afbaa991f9","Type":"ContainerDied","Data":"8686a2eb0a55583dd3a52cfa20ae3a9eee2604f09db5d25817137d5a43be23dd"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.614462 5058 generic.go:334] "Generic (PLEG): container finished" podID="18689556-e39f-4c5d-add8-aa934c468f1b" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.614594 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.614901 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18689556-e39f-4c5d-add8-aa934c468f1b","Type":"ContainerDied","Data":"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.614953 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18689556-e39f-4c5d-add8-aa934c468f1b","Type":"ContainerDied","Data":"e86aab12d5fc019486f6a2be9d4c8d9e1c40145036bdcfc86c6a15f78a6f2b1a"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.615597 5058 scope.go:117] "RemoveContainer" containerID="ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.620015 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance3d16-account-delete-zwlpf" event={"ID":"1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409","Type":"ContainerDied","Data":"30ac7af819906fa5ff8bff2ceaf9f49f1bc0e4983d612d2e827bb46d71cd6df0"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.620041 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance3d16-account-delete-zwlpf" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.624024 5058 generic.go:334] "Generic (PLEG): container finished" podID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerID="6f88404a55530828d3f110f54a67b95cc6d0025e7691e4c253f22716380239e4" exitCode=0 Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.624074 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerDied","Data":"6f88404a55530828d3f110f54a67b95cc6d0025e7691e4c253f22716380239e4"} Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.636444 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4" (OuterVolumeSpecName: "kube-api-access-nnjv4") pod "18689556-e39f-4c5d-add8-aa934c468f1b" (UID: "18689556-e39f-4c5d-add8-aa934c468f1b"). 
InnerVolumeSpecName "kube-api-access-nnjv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.640282 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67eb8142-3269-43d9-a2bc-a6afbaa991f9" (UID: "67eb8142-3269-43d9-a2bc-a6afbaa991f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.640361 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data" (OuterVolumeSpecName: "config-data") pod "18689556-e39f-4c5d-add8-aa934c468f1b" (UID: "18689556-e39f-4c5d-add8-aa934c468f1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.641916 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18689556-e39f-4c5d-add8-aa934c468f1b" (UID: "18689556-e39f-4c5d-add8-aa934c468f1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.697519 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.697559 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb8142-3269-43d9-a2bc-a6afbaa991f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.697568 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18689556-e39f-4c5d-add8-aa934c468f1b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.697577 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnjv4\" (UniqueName: \"kubernetes.io/projected/18689556-e39f-4c5d-add8-aa934c468f1b-kube-api-access-nnjv4\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.830418 5058 scope.go:117] "RemoveContainer" containerID="ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322" Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.830896 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322\": container with ID starting with ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322 not found: ID does not exist" containerID="ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.830971 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322"} err="failed to get container status \"ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322\": rpc error: code = NotFound desc = could not find container 
\"ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322\": container with ID starting with ad3afedad14adff0001065ca6e79e9e43f243248d386a564449bc6f530b99322 not found: ID does not exist" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.831008 5058 scope.go:117] "RemoveContainer" containerID="2c928056e43e35e805ee93793ad01965db7b4f5371b64ad8b2405baadb1b7cb2" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.831324 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.850973 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.861269 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbicanb71d-account-delete-xt45s"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.882752 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.885320 5058 scope.go:117] "RemoveContainer" containerID="76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.887449 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance3d16-account-delete-zwlpf"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.893469 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.896426 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron9e44-account-delete-z6nhv"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.940226 5058 scope.go:117] "RemoveContainer" containerID="76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230" Oct 14 07:10:51 crc kubenswrapper[5058]: E1014 07:10:51.941865 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230\": container with ID starting with 76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230 not found: ID does not exist" containerID="76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.941918 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230"} err="failed to get container status \"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230\": rpc error: code = NotFound desc = could not find container \"76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230\": container with ID starting with 76e9ee35c758a8010e54a9ce949c0cd65c5e405290db5aca61ce80f47febb230 not found: ID does not exist" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.941944 5058 scope.go:117] "RemoveContainer" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.956956 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.984930 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.991186 5058 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:10:51 crc kubenswrapper[5058]: I1014 07:10:51.995885 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013433 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013521 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013561 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lhj\" (UniqueName: \"kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013591 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013614 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013656 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013675 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.013817 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"61117082-2145-4117-9de9-d283039a6d7d\" (UID: \"61117082-2145-4117-9de9-d283039a6d7d\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.014755 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs" (OuterVolumeSpecName: "logs") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.017036 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.021048 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.033064 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts" (OuterVolumeSpecName: "scripts") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.036995 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj" (OuterVolumeSpecName: "kube-api-access-77lhj") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "kube-api-access-77lhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.054906 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-96d5856f6-5t2b2" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:44508->10.217.0.166:9311: read: connection reset by peer" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.055071 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-96d5856f6-5t2b2" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:44504->10.217.0.166:9311: read: connection reset by peer" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.085742 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:39158->10.217.0.202:8775: read: connection reset by peer" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.085743 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:39150->10.217.0.202:8775: read: connection reset by peer" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.104001 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115754 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115815 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115826 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115835 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115845 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lhj\" (UniqueName: \"kubernetes.io/projected/61117082-2145-4117-9de9-d283039a6d7d-kube-api-access-77lhj\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.115854 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61117082-2145-4117-9de9-d283039a6d7d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.120935 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.128043 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data" (OuterVolumeSpecName: "config-data") pod "61117082-2145-4117-9de9-d283039a6d7d" (UID: "61117082-2145-4117-9de9-d283039a6d7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.136257 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.176752 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.178856 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.182714 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.184907 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerName="nova-cell0-conductor-conductor" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.217701 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.217738 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.221266 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61117082-2145-4117-9de9-d283039a6d7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.240926 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.247860 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.289232 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.289922 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.298188 5058 scope.go:117] "RemoveContainer" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.298629 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68\": container with ID starting with a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 not found: ID does not exist" containerID="a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.298670 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68"} err="failed to get container status \"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68\": rpc error: code = NotFound desc = could not find container \"a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68\": container with ID starting with a8a4b4314daf0c00a0cf8e25bd3064c48b6f98d474bb4ebc92603f137fe67b68 not found: ID does not exist" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.298698 5058 scope.go:117] "RemoveContainer" containerID="8d666c7d69d97f26522cf42522e4b969f3cf0766d22881eccdc50dade395b4d4" Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.368292 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.415300 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.418977 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 14 07:10:52 crc kubenswrapper[5058]: E1014 07:10:52.419108 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423458 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423510 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423546 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423572 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423618 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423644 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423682 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.423705 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424592 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424633 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424668 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424697 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424768 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424818 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424845 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424878 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424906 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbsh\" (UniqueName: \"kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.424983 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425027 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpwdx\" (UniqueName: \"kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425051 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf9gl\" (UniqueName: \"kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425085 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425238 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425267 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425307 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425328 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425346 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425367 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs\") pod \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\" (UID: \"cc792cd6-15f5-4ef1-a383-4cecacce0df3\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425384 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs\") pod \"0ae7394b-1174-4a56-96d5-7fe0598e1343\" (UID: \"0ae7394b-1174-4a56-96d5-7fe0598e1343\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425403 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts\") pod \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\" (UID: \"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.425424 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q6r7\" (UniqueName: \"kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7\") pod \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\" (UID: \"8e6c9e27-cbe7-4371-8c0c-614755871b4e\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.426253 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs" (OuterVolumeSpecName: "logs") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.435281 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs" (OuterVolumeSpecName: "logs") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.432969 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7" (OuterVolumeSpecName: "kube-api-access-9q6r7") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "kube-api-access-9q6r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.448471 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.448554 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs" (OuterVolumeSpecName: "logs") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.448865 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs" (OuterVolumeSpecName: "logs") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.449364 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.449817 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx" (OuterVolumeSpecName: "kube-api-access-kpwdx") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "kube-api-access-kpwdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.450032 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts" (OuterVolumeSpecName: "scripts") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.455187 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.456635 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl" (OuterVolumeSpecName: "kube-api-access-sf9gl") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "kube-api-access-sf9gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.456959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts" (OuterVolumeSpecName: "scripts") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.487946 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.489327 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh" (OuterVolumeSpecName: "kube-api-access-qlbsh") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "kube-api-access-qlbsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.489712 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts" (OuterVolumeSpecName: "scripts") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528174 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528216 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528230 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528247 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6c9e27-cbe7-4371-8c0c-614755871b4e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528257 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc792cd6-15f5-4ef1-a383-4cecacce0df3-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528268 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528279 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbsh\" (UniqueName: \"kubernetes.io/projected/cc792cd6-15f5-4ef1-a383-4cecacce0df3-kube-api-access-qlbsh\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528288 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpwdx\" (UniqueName: \"kubernetes.io/projected/0ae7394b-1174-4a56-96d5-7fe0598e1343-kube-api-access-kpwdx\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528296 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf9gl\" (UniqueName: \"kubernetes.io/projected/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-kube-api-access-sf9gl\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528304 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc792cd6-15f5-4ef1-a383-4cecacce0df3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528328 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528342 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ae7394b-1174-4a56-96d5-7fe0598e1343-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528355 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528365 5058 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9q6r7\" (UniqueName: \"kubernetes.io/projected/8e6c9e27-cbe7-4371-8c0c-614755871b4e-kube-api-access-9q6r7\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.528377 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.592062 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.618497 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.631992 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.650714 5058 generic.go:334] "Generic (PLEG): container finished" podID="c2af2c14-fb00-45d0-8414-8754189455a0" containerID="009dfda92aa7d854c85932f7c1982d950ec47b6a6ac6fe07c1206850d5895921" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.650776 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2af2c14-fb00-45d0-8414-8754189455a0","Type":"ContainerDied","Data":"009dfda92aa7d854c85932f7c1982d950ec47b6a6ac6fe07c1206850d5895921"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.661048 5058 generic.go:334] "Generic (PLEG): container finished" podID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerID="53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.661114 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-96d5856f6-5t2b2" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.661140 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerDied","Data":"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.661216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-96d5856f6-5t2b2" event={"ID":"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c","Type":"ContainerDied","Data":"a62871ab34712c99303e554ea4f2853a720abad1b86e43c4121ac42ee1d75e33"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.661240 5058 scope.go:117] "RemoveContainer" containerID="53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.679324 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data" (OuterVolumeSpecName: "config-data") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.683049 5058 generic.go:334] "Generic (PLEG): container finished" podID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerID="7e5c7b3f56c44a6ade0810b15f6d0c8395be9268c61c135c78ce10faffa09108" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.683110 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerDied","Data":"7e5c7b3f56c44a6ade0810b15f6d0c8395be9268c61c135c78ce10faffa09108"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.706160 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.710068 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerID="bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.710116 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerDied","Data":"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.710138 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e6c9e27-cbe7-4371-8c0c-614755871b4e","Type":"ContainerDied","Data":"6979b8b041ed154edf6df00e2cda097cf01017bcca95937bbfbabbcfd657c554"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.710189 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.725203 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.735277 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"61117082-2145-4117-9de9-d283039a6d7d","Type":"ContainerDied","Data":"b21b34e3ed034e80d1b5555bc955e6172e57b302e09950a381fb8622c972d702"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.735380 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.736734 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.736774 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.736839 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.737016 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.737034 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.737072 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.737108 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfjxv\" (UniqueName: \"kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv\") pod \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\" (UID: \"b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c\") " Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.737508 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs" (OuterVolumeSpecName: "logs") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.738921 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.738937 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.738945 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.738953 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.743448 5058 generic.go:334] "Generic (PLEG): container finished" podID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerID="9b3da1831fb76f980056107a8a4364388de6c4f35dd0467a86ca43e04eafed44" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.743506 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerDied","Data":"9b3da1831fb76f980056107a8a4364388de6c4f35dd0467a86ca43e04eafed44"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.747444 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc792cd6-15f5-4ef1-a383-4cecacce0df3","Type":"ContainerDied","Data":"75870cd7693993bb01c43b953b219f4d91888762e28052bb43caf791ab31533f"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.747588 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.763074 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.765453 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv" (OuterVolumeSpecName: "kube-api-access-xfjxv") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "kube-api-access-xfjxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.766325 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c888fb598-6htl5" event={"ID":"3afd2a77-5f33-4fed-8b1d-5ebc565a24e9","Type":"ContainerDied","Data":"226fdbc0aa3248fef52f34a0b7a3730ac8f28b94d4de423220a71b7d5137a478"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.766419 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c888fb598-6htl5" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.775323 5058 generic.go:334] "Generic (PLEG): container finished" podID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerID="3f1df96bec469d5c2518c3516461fee935342a6eb2f6de0d71a00e3cd6a36abf" exitCode=0 Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.775366 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerDied","Data":"3f1df96bec469d5c2518c3516461fee935342a6eb2f6de0d71a00e3cd6a36abf"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.781097 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ae7394b-1174-4a56-96d5-7fe0598e1343","Type":"ContainerDied","Data":"a5e9dba2743e91483ac79b661230c7330917a64b1d06bb5718e40319b2cb03b3"} Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.781402 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.803984 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" path="/var/lib/kubelet/pods/0820570e-22cb-4f0b-9ee4-f5237ccdfdff/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.805125 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" path="/var/lib/kubelet/pods/133d4cdf-58ed-4544-8f05-328587a2b701/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.805738 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" path="/var/lib/kubelet/pods/18689556-e39f-4c5d-add8-aa934c468f1b/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.806734 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" path="/var/lib/kubelet/pods/1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.807274 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" path="/var/lib/kubelet/pods/6771f490-1ad7-4f61-98a0-d52df29ef7c8/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.807760 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" path="/var/lib/kubelet/pods/67eb8142-3269-43d9-a2bc-a6afbaa991f9/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.808647 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" path="/var/lib/kubelet/pods/78ebf4e1-4257-4a45-a776-734a016d955f/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.809204 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a92fd66-33d0-4026-9f89-4fe1b7a57478" path="/var/lib/kubelet/pods/9a92fd66-33d0-4026-9f89-4fe1b7a57478/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.809627 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" path="/var/lib/kubelet/pods/b8fe9fcd-194a-45e1-b1c3-08bdd5331810/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.810050 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="df614c68-4293-4c6c-a18e-e87dfcf8fe26" path="/var/lib/kubelet/pods/df614c68-4293-4c6c-a18e-e87dfcf8fe26/volumes" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.812945 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.813327 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.833486 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.833490 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843009 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfjxv\" (UniqueName: \"kubernetes.io/projected/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-kube-api-access-xfjxv\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843037 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843047 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843075 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843085 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.843093 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.924874 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.928446 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.940978 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.944398 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.944420 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.944429 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.949112 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data" (OuterVolumeSpecName: "config-data") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.955104 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" (UID: "b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.975278 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data" (OuterVolumeSpecName: "config-data") pod "0ae7394b-1174-4a56-96d5-7fe0598e1343" (UID: "0ae7394b-1174-4a56-96d5-7fe0598e1343"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.976066 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data" (OuterVolumeSpecName: "config-data") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.976334 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data" (OuterVolumeSpecName: "config-data") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.985238 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.985774 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" (UID: "3afd2a77-5f33-4fed-8b1d-5ebc565a24e9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:52 crc kubenswrapper[5058]: I1014 07:10:52.993269 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cc792cd6-15f5-4ef1-a383-4cecacce0df3" (UID: "cc792cd6-15f5-4ef1-a383-4cecacce0df3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.006477 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e6c9e27-cbe7-4371-8c0c-614755871b4e" (UID: "8e6c9e27-cbe7-4371-8c0c-614755871b4e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045854 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045882 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6c9e27-cbe7-4371-8c0c-614755871b4e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045892 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045902 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045910 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045919 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045930 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae7394b-1174-4a56-96d5-7fe0598e1343-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.045940 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: 
I1014 07:10:53.045950 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc792cd6-15f5-4ef1-a383-4cecacce0df3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.223480 5058 scope.go:117] "RemoveContainer" containerID="3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.225422 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.308746 5058 scope.go:117] "RemoveContainer" containerID="53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.309059 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 14 07:10:53 crc kubenswrapper[5058]: E1014 07:10:53.309881 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a\": container with ID starting with 53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a not found: ID does not exist" containerID="53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.309913 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a"} err="failed to get container status \"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a\": rpc error: code = NotFound desc = could not find container \"53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a\": container with ID starting with 53a0f5e79fb70d590d7aae22044fe0aceb63bd34f394bca3c143f09e18c7674a not found: ID does not exist"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.309939 5058 scope.go:117] "RemoveContainer" containerID="3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"
Oct 14 07:10:53 crc kubenswrapper[5058]: E1014 07:10:53.310253 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c\": container with ID starting with 3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c not found: ID does not exist" containerID="3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.310288 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c"} err="failed to get container status \"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c\": rpc error: code = NotFound desc = could not find container \"3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c\": container with ID starting with 3946ae6ac1424459b5b69df416c80ffa5cbebfb2a339c73bfeda33b305a8383c not found: ID does not exist"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.310305 5058 scope.go:117] "RemoveContainer" containerID="bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf"
Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.335767 5058 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.353271 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data\") pod \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.353304 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8kp\" (UniqueName: \"kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp\") pod \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.353394 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs\") pod \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.353427 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs\") pod \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.353462 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle\") pod \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\" (UID: \"2e04a9c0-9542-49e8-b185-87a9ab267e7a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.357610 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp" (OuterVolumeSpecName: "kube-api-access-4s8kp") pod "2e04a9c0-9542-49e8-b185-87a9ab267e7a" (UID: "2e04a9c0-9542-49e8-b185-87a9ab267e7a"). InnerVolumeSpecName "kube-api-access-4s8kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.363705 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs" (OuterVolumeSpecName: "logs") pod "2e04a9c0-9542-49e8-b185-87a9ab267e7a" (UID: "2e04a9c0-9542-49e8-b185-87a9ab267e7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.365502 5058 scope.go:117] "RemoveContainer" containerID="90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.368232 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.381083 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e04a9c0-9542-49e8-b185-87a9ab267e7a" (UID: "2e04a9c0-9542-49e8-b185-87a9ab267e7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.387165 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.392123 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.396010 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.396352 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.396789 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data" (OuterVolumeSpecName: "config-data") pod "2e04a9c0-9542-49e8-b185-87a9ab267e7a" (UID: "2e04a9c0-9542-49e8-b185-87a9ab267e7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.398702 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.401569 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.408452 5058 scope.go:117] "RemoveContainer" containerID="bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.408591 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-96d5856f6-5t2b2"] Oct 14 07:10:53 crc kubenswrapper[5058]: E1014 07:10:53.409686 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf\": container with ID starting with bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf not found: ID does not exist" containerID="bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.409813 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf"} err="failed to get container status \"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf\": rpc error: code = NotFound desc = could not find container \"bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf\": container with ID starting with bc6dc140f7c7de578ab75906c104909caa6bc424284f213dacd18f58cfccf1cf not found: ID does not exist" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.409836 5058 scope.go:117] "RemoveContainer" containerID="90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b" Oct 14 07:10:53 crc kubenswrapper[5058]: E1014 07:10:53.410071 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b\": container with ID starting with 90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b not found: ID does not exist" 
containerID="90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.410092 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b"} err="failed to get container status \"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b\": rpc error: code = NotFound desc = could not find container \"90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b\": container with ID starting with 90d999de41599c61b1967e7c63eac58be2f35ea8bb826bf5762096f5089cc54b not found: ID does not exist" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.410113 5058 scope.go:117] "RemoveContainer" containerID="ddf55f3c2bbe35fdc2ab51be906a947fa2e2fdff71a5c9edee6623ab01efa4ca" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.422013 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.429212 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.440637 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.446527 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.451460 5058 scope.go:117] "RemoveContainer" containerID="3dd806326c4ebc88f937e535e0184dd39275bf7daaa1cdd5ccba019323c49240" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.452099 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.459495 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.469162 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2e04a9c0-9542-49e8-b185-87a9ab267e7a" (UID: "2e04a9c0-9542-49e8-b185-87a9ab267e7a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.477668 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d3d37c5f-3d6c-4f84-a681-b4bd9dffb466/ovn-northd/0.log" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.477750 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485620 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9tnz\" (UniqueName: \"kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz\") pod \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485672 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data\") pod \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485699 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle\") pod \"c2af2c14-fb00-45d0-8414-8754189455a0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485754 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom\") pod \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485808 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config\") pod \"c2af2c14-fb00-45d0-8414-8754189455a0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485847 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs\") pod \"c2af2c14-fb00-45d0-8414-8754189455a0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485879 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data\") pod \"c2af2c14-fb00-45d0-8414-8754189455a0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485961 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc2c7\" (UniqueName: \"kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7\") pod \"c2af2c14-fb00-45d0-8414-8754189455a0\" (UID: \"c2af2c14-fb00-45d0-8414-8754189455a0\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.485981 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle\") pod \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\" (UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.486017 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs\") pod \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\" 
(UID: \"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.486677 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs" (OuterVolumeSpecName: "logs") pod "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" (UID: "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.487417 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.488837 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c2af2c14-fb00-45d0-8414-8754189455a0" (UID: "c2af2c14-fb00-45d0-8414-8754189455a0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.492445 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data" (OuterVolumeSpecName: "config-data") pod "c2af2c14-fb00-45d0-8414-8754189455a0" (UID: "c2af2c14-fb00-45d0-8414-8754189455a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.492943 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz" (OuterVolumeSpecName: "kube-api-access-z9tnz") pod "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" (UID: "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27"). InnerVolumeSpecName "kube-api-access-z9tnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493049 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493116 5058 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493188 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493254 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493329 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e04a9c0-9542-49e8-b185-87a9ab267e7a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493399 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8kp\" (UniqueName: \"kubernetes.io/projected/2e04a9c0-9542-49e8-b185-87a9ab267e7a-kube-api-access-4s8kp\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493482 5058 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2af2c14-fb00-45d0-8414-8754189455a0-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.493562 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e04a9c0-9542-49e8-b185-87a9ab267e7a-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.496450 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c888fb598-6htl5"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.498639 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" (UID: "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.501208 5058 scope.go:117] "RemoveContainer" containerID="f5d187c11c0f467f10e4343a5baa9bc8683d7c0ce7601c7a7a218b64e989e1ed" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.523978 5058 scope.go:117] "RemoveContainer" containerID="75cd20c0073237cb2ae10b086a89b60f8da343d852dbb0faa5aaa253f9871fe4" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.529149 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7" (OuterVolumeSpecName: "kube-api-access-rc2c7") pod "c2af2c14-fb00-45d0-8414-8754189455a0" (UID: "c2af2c14-fb00-45d0-8414-8754189455a0"). InnerVolumeSpecName "kube-api-access-rc2c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.536087 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" (UID: "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.554411 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c2af2c14-fb00-45d0-8414-8754189455a0" (UID: "c2af2c14-fb00-45d0-8414-8754189455a0"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.560713 5058 scope.go:117] "RemoveContainer" containerID="3f22606015530d5cd831d677840b6de4c02774150161b7a98b4b65f28323dbc7" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.568452 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2af2c14-fb00-45d0-8414-8754189455a0" (UID: "c2af2c14-fb00-45d0-8414-8754189455a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.580930 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data" (OuterVolumeSpecName: "config-data") pod "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" (UID: "c9ccef3b-4aab-4a9f-acce-5a26c3f00b27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.588437 5058 scope.go:117] "RemoveContainer" containerID="6445c8e5022085e59b4cf30be5b9f660e7e4d84d81a5dc48c1a2c637c48a4414" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594358 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594397 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data\") pod \"195c5812-232a-4b0e-9c5d-ba4b1af44126\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594422 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle\") pod \"ef71f1de-0039-4574-955a-ce760eb7ea3e\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594470 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594488 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594524 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594550 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle\") pod \"195c5812-232a-4b0e-9c5d-ba4b1af44126\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594574 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594593 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlsk\" (UniqueName: \"kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk\") pod \"ef71f1de-0039-4574-955a-ce760eb7ea3e\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594612 5058 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594650 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594672 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cb85\" (UniqueName: \"kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594691 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.594709 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595073 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595156 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data\") pod \"ef71f1de-0039-4574-955a-ce760eb7ea3e\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595179 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595218 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595240 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmkcv\" (UniqueName: \"kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv\") pod \"195c5812-232a-4b0e-9c5d-ba4b1af44126\" (UID: \"195c5812-232a-4b0e-9c5d-ba4b1af44126\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595261 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595291 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595330 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b4hp\" (UniqueName: \"kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595360 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595389 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595418 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd\") pod 
\"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595439 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs\") pod \"42219495-c664-4b9f-a5a7-f66b0bea105a\" (UID: \"42219495-c664-4b9f-a5a7-f66b0bea105a\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595456 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595470 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595490 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom\") pod \"ef71f1de-0039-4574-955a-ce760eb7ea3e\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595515 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs\") pod \"ef71f1de-0039-4574-955a-ce760eb7ea3e\" (UID: \"ef71f1de-0039-4574-955a-ce760eb7ea3e\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595536 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2rcv\" (UniqueName: \"kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595565 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir\") pod \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\" (UID: \"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.595583 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets\") pod \"ca7a9685-6d40-487b-aebf-f0a01ace044b\" (UID: \"ca7a9685-6d40-487b-aebf-f0a01ace044b\") " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596575 5058 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596593 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9tnz\" (UniqueName: \"kubernetes.io/projected/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-kube-api-access-z9tnz\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596634 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596648 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596656 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596665 5058 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2af2c14-fb00-45d0-8414-8754189455a0-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596674 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.596683 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc2c7\" (UniqueName: \"kubernetes.io/projected/c2af2c14-fb00-45d0-8414-8754189455a0-kube-api-access-rc2c7\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.597710 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.598385 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.598684 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.598741 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts" (OuterVolumeSpecName: "scripts") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.599266 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.599952 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config" (OuterVolumeSpecName: "config") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.602643 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.605339 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs" (OuterVolumeSpecName: "logs") pod "ef71f1de-0039-4574-955a-ce760eb7ea3e" (UID: "ef71f1de-0039-4574-955a-ce760eb7ea3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.605541 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts" (OuterVolumeSpecName: "scripts") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.606080 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.608655 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets" (OuterVolumeSpecName: "secrets") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.609546 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp" (OuterVolumeSpecName: "kube-api-access-7b4hp") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "kube-api-access-7b4hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.611661 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85" (OuterVolumeSpecName: "kube-api-access-8cb85") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "kube-api-access-8cb85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.622474 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv" (OuterVolumeSpecName: "kube-api-access-gmkcv") pod "195c5812-232a-4b0e-9c5d-ba4b1af44126" (UID: "195c5812-232a-4b0e-9c5d-ba4b1af44126"). InnerVolumeSpecName "kube-api-access-gmkcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.623714 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk" (OuterVolumeSpecName: "kube-api-access-pvlsk") pod "ef71f1de-0039-4574-955a-ce760eb7ea3e" (UID: "ef71f1de-0039-4574-955a-ce760eb7ea3e"). InnerVolumeSpecName "kube-api-access-pvlsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.624933 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef71f1de-0039-4574-955a-ce760eb7ea3e" (UID: "ef71f1de-0039-4574-955a-ce760eb7ea3e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.640711 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef71f1de-0039-4574-955a-ce760eb7ea3e" (UID: "ef71f1de-0039-4574-955a-ce760eb7ea3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.642928 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv" (OuterVolumeSpecName: "kube-api-access-l2rcv") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "kube-api-access-l2rcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.645519 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.654375 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.663934 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.665981 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data" (OuterVolumeSpecName: "config-data") pod "195c5812-232a-4b0e-9c5d-ba4b1af44126" (UID: "195c5812-232a-4b0e-9c5d-ba4b1af44126"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.697899 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.697930 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlsk\" (UniqueName: \"kubernetes.io/projected/ef71f1de-0039-4574-955a-ce760eb7ea3e-kube-api-access-pvlsk\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.697939 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cb85\" (UniqueName: \"kubernetes.io/projected/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-kube-api-access-8cb85\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.697947 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.697991 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698000 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmkcv\" (UniqueName: \"kubernetes.io/projected/195c5812-232a-4b0e-9c5d-ba4b1af44126-kube-api-access-gmkcv\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698008 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698019 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b4hp\" (UniqueName: \"kubernetes.io/projected/42219495-c664-4b9f-a5a7-f66b0bea105a-kube-api-access-7b4hp\") on node \"crc\" DevicePath \"\"" Oct 14 
07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698032 5058 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca7a9685-6d40-487b-aebf-f0a01ace044b-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698042 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42219495-c664-4b9f-a5a7-f66b0bea105a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698067 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698077 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698085 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698093 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef71f1de-0039-4574-955a-ce760eb7ea3e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698102 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2rcv\" (UniqueName: \"kubernetes.io/projected/ca7a9685-6d40-487b-aebf-f0a01ace044b-kube-api-access-l2rcv\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698112 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698122 5058 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698132 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698143 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698155 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698181 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.698193 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.701858 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.706482 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195c5812-232a-4b0e-9c5d-ba4b1af44126" (UID: "195c5812-232a-4b0e-9c5d-ba4b1af44126"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.723206 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data" (OuterVolumeSpecName: "config-data") pod "ef71f1de-0039-4574-955a-ce760eb7ea3e" (UID: "ef71f1de-0039-4574-955a-ce760eb7ea3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.731962 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.742862 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ca7a9685-6d40-487b-aebf-f0a01ace044b" (UID: "ca7a9685-6d40-487b-aebf-f0a01ace044b"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.750768 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.754001 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.763870 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.774723 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" (UID: "d3d37c5f-3d6c-4f84-a681-b4bd9dffb466"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.786921 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data" (OuterVolumeSpecName: "config-data") pod "42219495-c664-4b9f-a5a7-f66b0bea105a" (UID: "42219495-c664-4b9f-a5a7-f66b0bea105a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800045 5058 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800077 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800088 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800100 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef71f1de-0039-4574-955a-ce760eb7ea3e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800112 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800124 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800134 5058 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7a9685-6d40-487b-aebf-f0a01ace044b-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800143 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42219495-c664-4b9f-a5a7-f66b0bea105a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800153 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.800163 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/195c5812-232a-4b0e-9c5d-ba4b1af44126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.807604 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d3d37c5f-3d6c-4f84-a681-b4bd9dffb466/ovn-northd/0.log" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.807653 5058 generic.go:334] "Generic (PLEG): container finished" podID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" exitCode=139 Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.807711 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerDied","Data":"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.807741 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d37c5f-3d6c-4f84-a681-b4bd9dffb466","Type":"ContainerDied","Data":"a4ec591bdacb9729960b42f0f5a07978a7979936bd95702391a3c68836852f37"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.807831 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.820781 5058 generic.go:334] "Generic (PLEG): container finished" podID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerID="f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2" exitCode=0 Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.820987 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerDied","Data":"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.821042 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca7a9685-6d40-487b-aebf-f0a01ace044b","Type":"ContainerDied","Data":"978f4d2e7e004e78ae23e9f1eadde34cfc0fb72764091ba6730d46602cf48bd9"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.821187 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.827597 5058 generic.go:334] "Generic (PLEG): container finished" podID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerID="13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09" exitCode=0 Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.827671 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerDied","Data":"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.827704 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42219495-c664-4b9f-a5a7-f66b0bea105a","Type":"ContainerDied","Data":"a29da43231afc29a7824d7f94710dc2bbc75e4c4e8c6108dd9e4063cd8abcd29"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.827818 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.833075 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-76589ffb55-8wb6q" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.834088 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76589ffb55-8wb6q" event={"ID":"c9ccef3b-4aab-4a9f-acce-5a26c3f00b27","Type":"ContainerDied","Data":"d5a7cac8ab393d5b51b1667ad2a872c78c0dea2f4a2a10366886f45852b6b6b7"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.841990 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" event={"ID":"ef71f1de-0039-4574-955a-ce760eb7ea3e","Type":"ContainerDied","Data":"8c645d4039f968d2d008a05dd294a7202643f7f844429ba9fed91241010ec0f7"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.842100 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.846285 5058 generic.go:334] "Generic (PLEG): container finished" podID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" exitCode=0 Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.846357 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"195c5812-232a-4b0e-9c5d-ba4b1af44126","Type":"ContainerDied","Data":"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.846380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"195c5812-232a-4b0e-9c5d-ba4b1af44126","Type":"ContainerDied","Data":"a122f9a61d0653ac3a94a42e2128d07265ba88187826e9fd9beda152d8584b86"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.846483 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.853151 5058 scope.go:117] "RemoveContainer" containerID="6f88404a55530828d3f110f54a67b95cc6d0025e7691e4c253f22716380239e4" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.854373 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.854843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e04a9c0-9542-49e8-b185-87a9ab267e7a","Type":"ContainerDied","Data":"acfe4ed47b6c12938ef0c19a0ab7489749d622e5fcb77515c43da88dff94012d"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.860149 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2af2c14-fb00-45d0-8414-8754189455a0","Type":"ContainerDied","Data":"cddbfd68bb2ac47b37e2ace5a28bc2e67d80087cbaacc2ec5c404da47ed4344e"} Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.860240 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.959234 5058 scope.go:117] "RemoveContainer" containerID="a1c6b82cefdf4862cb066b04eb626d0455c0c4fe043784013c88eec253145cfa" Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.961363 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.969565 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.980918 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:10:53 crc kubenswrapper[5058]: I1014 07:10:53.985672 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.004121 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.010564 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.017062 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.025743 5058 scope.go:117] "RemoveContainer" containerID="33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.026006 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.033179 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.038960 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-76589ffb55-8wb6q"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.044422 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.054841 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6dd6ff5ccd-fdrls"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.059692 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.064020 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.068448 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.070885 5058 scope.go:117] "RemoveContainer" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.072360 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.089133 5058 scope.go:117] "RemoveContainer" containerID="33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.089597 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d\": container with ID 
starting with 33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d not found: ID does not exist" containerID="33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.089636 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d"} err="failed to get container status \"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d\": rpc error: code = NotFound desc = could not find container \"33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d\": container with ID starting with 33e5a447f822a9de28dbeb95d76e8c87f47746e59d8df59b197a883787ccd30d not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.089662 5058 scope.go:117] "RemoveContainer" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.090011 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437\": container with ID starting with 3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437 not found: ID does not exist" containerID="3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.090030 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437"} err="failed to get container status \"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437\": rpc error: code = NotFound desc = could not find container \"3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437\": container with ID starting with 3ded43b690fc695a567be97bf2cc42e45f604b13e946e16b4127f8fe3caea437 not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.090042 5058 scope.go:117] "RemoveContainer" containerID="f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.108984 5058 scope.go:117] "RemoveContainer" containerID="92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.138545 5058 scope.go:117] "RemoveContainer" containerID="f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.138876 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2\": container with ID starting with f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2 not found: ID does not exist" containerID="f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.138903 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2"} err="failed to get container status \"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2\": rpc error: code = NotFound desc = could not find container \"f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2\": container with ID starting with 
f833e99fc1b98c67ab19d7236c2e6f712beeee5896dbe2f2c65eb578536e98b2 not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.138923 5058 scope.go:117] "RemoveContainer" containerID="92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.139129 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d\": container with ID starting with 92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d not found: ID does not exist" containerID="92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.139151 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d"} err="failed to get container status \"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d\": rpc error: code = NotFound desc = could not find container \"92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d\": container with ID starting with 92ec783c42ea5186054ee297cfc170bb9ad0fa4f8386822487705724b0abc01d not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.139167 5058 scope.go:117] "RemoveContainer" containerID="45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.161460 5058 scope.go:117] "RemoveContainer" containerID="6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.178721 5058 scope.go:117] "RemoveContainer" containerID="13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.202251 5058 scope.go:117] "RemoveContainer" containerID="133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.227348 5058 scope.go:117] "RemoveContainer" containerID="45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.227785 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190\": container with ID starting with 45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190 not found: ID does not exist" containerID="45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.227889 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190"} err="failed to get container status \"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190\": rpc error: code = NotFound desc = could not find container \"45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190\": container with ID starting with 45747cc0425ed044fab611b336930fbfdb91d62d8dfc372d0bff555b4507b190 not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.227908 5058 scope.go:117] "RemoveContainer" containerID="6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.229016 5058 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c\": container with ID starting with 6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c not found: ID does not exist" containerID="6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229039 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c"} err="failed to get container status \"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c\": rpc error: code = NotFound desc = could not find container \"6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c\": container with ID starting with 6f4ab83055306a812351f3116ba09ecd5a412fb95431c58a50db1b3f1018ea1c not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229051 5058 scope.go:117] "RemoveContainer" containerID="13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.229358 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09\": container with ID starting with 13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09 not found: ID does not exist" containerID="13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229382 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09"} err="failed to get container status \"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09\": rpc error: code = NotFound desc = could not find container \"13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09\": container with ID starting with 13b51256622b1d0183f055f432d05eb8671c7463bd34104ae93d823ee516eb09 not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229397 5058 scope.go:117] "RemoveContainer" containerID="133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.229674 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80\": container with ID starting with 133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80 not found: ID does not exist" containerID="133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229696 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80"} err="failed to get container status \"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80\": rpc error: code = NotFound desc = could not find container \"133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80\": container with ID starting with 133e95f7a9ea4d281d964748448772223ea7e4db59b189c006ef78a4c76fbb80 not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.229709 5058 scope.go:117] "RemoveContainer" 
containerID="9b3da1831fb76f980056107a8a4364388de6c4f35dd0467a86ca43e04eafed44" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.247696 5058 scope.go:117] "RemoveContainer" containerID="d8be8bca483382edada5af0684b9c15ed30003719e5ed4f96a64a29209b1930d" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.278112 5058 scope.go:117] "RemoveContainer" containerID="3f1df96bec469d5c2518c3516461fee935342a6eb2f6de0d71a00e3cd6a36abf" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.314490 5058 scope.go:117] "RemoveContainer" containerID="9ee6ab347aa76c2029fca9b728228a4e733b2d2b1b8c42b5e8bdb632ac615717" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.330439 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.337508 5058 scope.go:117] "RemoveContainer" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.375628 5058 scope.go:117] "RemoveContainer" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.376165 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec\": container with ID starting with 1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec not found: ID does not exist" containerID="1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.376205 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec"} err="failed to get container status \"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec\": rpc error: code = NotFound desc = could not find container \"1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec\": container with ID starting with 1ba734f3cd64e378f0ff75280d9c3c877642127e151460139e6717df45585aec not found: ID does not exist" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.376231 5058 scope.go:117] "RemoveContainer" containerID="7e5c7b3f56c44a6ade0810b15f6d0c8395be9268c61c135c78ce10faffa09108" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.400438 5058 scope.go:117] "RemoveContainer" containerID="13b2f201f21fe9a64c31bfba7e902dfeaa6156cf92a7a9d10dc0f0276012bbec" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415052 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415147 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415183 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" 
(UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415205 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415241 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415256 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415307 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2x4q\" (UniqueName: \"kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.415371 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys\") pod \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\" (UID: \"ed42eec9-b3be-4971-914c-dc5d6603b0e1\") " Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.415660 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.415700 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data podName:b753342c-4a7e-4bf6-809a-3c5bc083ba6a nodeName:}" failed. No retries permitted until 2025-10-14 07:11:02.415688495 +0000 UTC m=+1410.326772301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data") pod "rabbitmq-cell1-server-0" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a") : configmap "rabbitmq-cell1-config-data" not found Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.421996 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.422098 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.422692 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q" (OuterVolumeSpecName: "kube-api-access-m2x4q") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "kube-api-access-m2x4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.424515 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts" (OuterVolumeSpecName: "scripts") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.429033 5058 scope.go:117] "RemoveContainer" containerID="009dfda92aa7d854c85932f7c1982d950ec47b6a6ac6fe07c1206850d5895921" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.436005 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data" (OuterVolumeSpecName: "config-data") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.454092 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.459750 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.469785 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed42eec9-b3be-4971-914c-dc5d6603b0e1" (UID: "ed42eec9-b3be-4971-914c-dc5d6603b0e1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.516997 5058 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517038 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517051 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517064 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517079 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517092 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517105 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed42eec9-b3be-4971-914c-dc5d6603b0e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.517116 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2x4q\" (UniqueName: \"kubernetes.io/projected/ed42eec9-b3be-4971-914c-dc5d6603b0e1-kube-api-access-m2x4q\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.738311 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.783261 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.783906 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.784303 5058 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.784368 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.785522 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.787590 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.789071 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.789175 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.822419 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" path="/var/lib/kubelet/pods/0ae7394b-1174-4a56-96d5-7fe0598e1343/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.823920 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" path="/var/lib/kubelet/pods/195c5812-232a-4b0e-9c5d-ba4b1af44126/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.826427 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" path="/var/lib/kubelet/pods/2e04a9c0-9542-49e8-b185-87a9ab267e7a/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.829534 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" path="/var/lib/kubelet/pods/3afd2a77-5f33-4fed-8b1d-5ebc565a24e9/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.831161 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" path="/var/lib/kubelet/pods/42219495-c664-4b9f-a5a7-f66b0bea105a/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.836603 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61117082-2145-4117-9de9-d283039a6d7d" path="/var/lib/kubelet/pods/61117082-2145-4117-9de9-d283039a6d7d/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.838033 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" path="/var/lib/kubelet/pods/8e6c9e27-cbe7-4371-8c0c-614755871b4e/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.840374 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" path="/var/lib/kubelet/pods/b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.841706 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2af2c14-fb00-45d0-8414-8754189455a0" path="/var/lib/kubelet/pods/c2af2c14-fb00-45d0-8414-8754189455a0/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.842760 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" path="/var/lib/kubelet/pods/c9ccef3b-4aab-4a9f-acce-5a26c3f00b27/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.845203 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" path="/var/lib/kubelet/pods/ca7a9685-6d40-487b-aebf-f0a01ace044b/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.846566 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" path="/var/lib/kubelet/pods/cc792cd6-15f5-4ef1-a383-4cecacce0df3/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.848872 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" path="/var/lib/kubelet/pods/d3d37c5f-3d6c-4f84-a681-b4bd9dffb466/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.850150 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" path="/var/lib/kubelet/pods/ef71f1de-0039-4574-955a-ce760eb7ea3e/volumes" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.881205 5058 generic.go:334] "Generic (PLEG): container finished" podID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" containerID="6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a" exitCode=0 Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.881268 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6db45f8b6f-vkgmc" event={"ID":"ed42eec9-b3be-4971-914c-dc5d6603b0e1","Type":"ContainerDied","Data":"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a"} Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.881284 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6db45f8b6f-vkgmc" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.881353 5058 scope.go:117] "RemoveContainer" containerID="6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.881341 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6db45f8b6f-vkgmc" event={"ID":"ed42eec9-b3be-4971-914c-dc5d6603b0e1","Type":"ContainerDied","Data":"ff42a3270064298d8556ae928c31c7a7516be9ff98d610b1549b11f3705b5f6e"} Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.904291 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.908750 5058 scope.go:117] "RemoveContainer" containerID="6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.908997 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6db45f8b6f-vkgmc"] Oct 14 07:10:54 crc kubenswrapper[5058]: E1014 07:10:54.909335 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a\": container with ID starting with 6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a not found: ID does not exist" containerID="6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a" Oct 14 07:10:54 crc kubenswrapper[5058]: I1014 07:10:54.909413 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a"} err="failed to get container status \"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a\": rpc error: code = NotFound desc = could not find container \"6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a\": container with ID starting with 6649247addd986a94db4112554d42b98f0c00ebdb397e0bd21bfc4f469f2321a not found: ID does not exist" Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.039482 5058 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.039597 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data podName:59f969a6-6fea-40c8-9254-284205f5b3ea nodeName:}" failed. No retries permitted until 2025-10-14 07:11:03.039571468 +0000 UTC m=+1410.950655304 (durationBeforeRetry 8s). 
Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.525676 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.527470 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.529023 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.529078 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.548676 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.582228 5058 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649418 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649491 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649529 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bb8\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649558 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svlxp\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649580 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649838 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649881 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649910 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649946 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.649992 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: 
\"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650022 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650081 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650109 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650138 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650169 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650193 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650244 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650276 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650299 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650324 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 
14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650346 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret\") pod \"59f969a6-6fea-40c8-9254-284205f5b3ea\" (UID: \"59f969a6-6fea-40c8-9254-284205f5b3ea\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650367 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins\") pod \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\" (UID: \"b753342c-4a7e-4bf6-809a-3c5bc083ba6a\") " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650368 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650831 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.650913 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.655438 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.656644 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.656724 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8" (OuterVolumeSpecName: "kube-api-access-k8bb8") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "kube-api-access-k8bb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.657237 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.657854 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.657895 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.657916 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info" (OuterVolumeSpecName: "pod-info") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.658279 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.658862 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp" (OuterVolumeSpecName: "kube-api-access-svlxp") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "kube-api-access-svlxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.659956 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.660047 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.668483 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.671575 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.673126 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.673595 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data" (OuterVolumeSpecName: "config-data") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.677041 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data" (OuterVolumeSpecName: "config-data") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.691267 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.719574 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf" (OuterVolumeSpecName: "server-conf") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.749100 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b753342c-4a7e-4bf6-809a-3c5bc083ba6a" (UID: "b753342c-4a7e-4bf6-809a-3c5bc083ba6a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752321 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752409 5058 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752489 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752543 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752594 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752665 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752719 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752770 5058 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59f969a6-6fea-40c8-9254-284205f5b3ea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752865 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.752939 5058 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753003 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8bb8\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-kube-api-access-k8bb8\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753055 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svlxp\" (UniqueName: \"kubernetes.io/projected/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-kube-api-access-svlxp\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753112 5058 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59f969a6-6fea-40c8-9254-284205f5b3ea-pod-info\") on node \"crc\" 
DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753164 5058 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753214 5058 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753268 5058 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753331 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753494 5058 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59f969a6-6fea-40c8-9254-284205f5b3ea-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753576 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.753634 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b753342c-4a7e-4bf6-809a-3c5bc083ba6a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.756159 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "59f969a6-6fea-40c8-9254-284205f5b3ea" (UID: "59f969a6-6fea-40c8-9254-284205f5b3ea"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.768185 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.770543 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.855373 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.855412 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59f969a6-6fea-40c8-9254-284205f5b3ea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.855424 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.897322 5058 generic.go:334] "Generic (PLEG): container finished" podID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerID="006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d" exitCode=0 Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.897473 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.897430 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerDied","Data":"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d"} Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.897989 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59f969a6-6fea-40c8-9254-284205f5b3ea","Type":"ContainerDied","Data":"1bbe031be043bd216e1a6072617f3e375af4c6d19a0d6b608698078751e8a162"} Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.898077 5058 scope.go:117] "RemoveContainer" containerID="006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.900350 5058 generic.go:334] "Generic (PLEG): container finished" podID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerID="c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4" exitCode=0 Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.900459 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.901546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerDied","Data":"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4"} Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.901674 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b753342c-4a7e-4bf6-809a-3c5bc083ba6a","Type":"ContainerDied","Data":"503a01fb18f33d2d3c6232ee4e9c88dbe5e7f4095c948dc5fdc0d02ac9e15af8"} Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.921424 5058 scope.go:117] "RemoveContainer" containerID="b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.949915 5058 scope.go:117] "RemoveContainer" containerID="006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d" Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.950431 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d\": container with ID starting with 006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d not found: ID does not exist" containerID="006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.950488 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d"} err="failed to get container status \"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d\": rpc error: code = NotFound desc = could not find container \"006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d\": container with ID starting with 006335b12b34168908fec6ca29355ed5021006679e2ac70e93cb832bdd88ca7d not found: ID does not exist" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.950536 5058 scope.go:117] "RemoveContainer" containerID="b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20" Oct 14 07:10:55 crc kubenswrapper[5058]: E1014 07:10:55.950841 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20\": container with ID starting with b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20 not found: ID does not exist" containerID="b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.950956 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20"} err="failed to get container status \"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20\": rpc error: code = NotFound desc = could not find container \"b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20\": container with ID starting with b088c3981678450059835e98b9aa32c9a4a555914136576226f74f961ccf7c20 not found: ID does not exist" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.951173 5058 scope.go:117] "RemoveContainer" containerID="c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4" Oct 14 07:10:55 crc kubenswrapper[5058]: 
I1014 07:10:55.955739 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.968587 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.979769 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.991750 5058 scope.go:117] "RemoveContainer" containerID="3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c" Oct 14 07:10:55 crc kubenswrapper[5058]: I1014 07:10:55.992124 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.017052 5058 scope.go:117] "RemoveContainer" containerID="c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4" Oct 14 07:10:56 crc kubenswrapper[5058]: E1014 07:10:56.017542 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4\": container with ID starting with c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4 not found: ID does not exist" containerID="c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.017606 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4"} err="failed to get container status \"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4\": rpc error: code = NotFound desc = could not find container \"c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4\": container with ID starting with c3b54edc67697ad05ebccd789a34bb6e4c7188dc2dc617019fb842cd35629ef4 not found: ID does not exist" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.017641 5058 scope.go:117] "RemoveContainer" containerID="3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c" Oct 14 07:10:56 crc kubenswrapper[5058]: E1014 07:10:56.018442 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c\": container with ID starting with 3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c not found: ID does not exist" containerID="3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.018531 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c"} err="failed to get container status \"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c\": rpc error: code = NotFound desc = could not find container \"3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c\": container with ID starting with 3b1c66ec79f41850b3eda45b2b8dd3353476dc1c0c6cba14681bfe2e5ec3732c not found: ID does not exist" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.808766 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" path="/var/lib/kubelet/pods/59f969a6-6fea-40c8-9254-284205f5b3ea/volumes" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.811037 5058 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" path="/var/lib/kubelet/pods/b753342c-4a7e-4bf6-809a-3c5bc083ba6a/volumes" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.813421 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" path="/var/lib/kubelet/pods/ed42eec9-b3be-4971-914c-dc5d6603b0e1/volumes" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.870703 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.916697 5058 generic.go:334] "Generic (PLEG): container finished" podID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" exitCode=0 Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.916765 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c7b2987-5495-4f39-a2d0-0c2461b31e1e","Type":"ContainerDied","Data":"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a"} Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.916821 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5c7b2987-5495-4f39-a2d0-0c2461b31e1e","Type":"ContainerDied","Data":"b225081c585653bb06f321832b00b75ebbcb889126ad1aa580ec341f65c9eeae"} Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.916843 5058 scope.go:117] "RemoveContainer" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.916943 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.956542 5058 scope.go:117] "RemoveContainer" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" Oct 14 07:10:56 crc kubenswrapper[5058]: E1014 07:10:56.957128 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a\": container with ID starting with 84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a not found: ID does not exist" containerID="84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.957174 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a"} err="failed to get container status \"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a\": rpc error: code = NotFound desc = could not find container \"84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a\": container with ID starting with 84ae2447b5cc8ac4fe03a113fbb99be5d0320a8fd5e73d5eb080f1a2b67bd00a not found: ID does not exist" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.969558 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle\") pod \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.969671 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data\") pod \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.969837 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnc9g\" (UniqueName: \"kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g\") pod \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\" (UID: \"5c7b2987-5495-4f39-a2d0-0c2461b31e1e\") " Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.974829 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g" (OuterVolumeSpecName: "kube-api-access-dnc9g") pod "5c7b2987-5495-4f39-a2d0-0c2461b31e1e" (UID: "5c7b2987-5495-4f39-a2d0-0c2461b31e1e"). InnerVolumeSpecName "kube-api-access-dnc9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:10:56 crc kubenswrapper[5058]: I1014 07:10:56.992639 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c7b2987-5495-4f39-a2d0-0c2461b31e1e" (UID: "5c7b2987-5495-4f39-a2d0-0c2461b31e1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.011658 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data" (OuterVolumeSpecName: "config-data") pod "5c7b2987-5495-4f39-a2d0-0c2461b31e1e" (UID: "5c7b2987-5495-4f39-a2d0-0c2461b31e1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.071878 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.071920 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnc9g\" (UniqueName: \"kubernetes.io/projected/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-kube-api-access-dnc9g\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.071933 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7b2987-5495-4f39-a2d0-0c2461b31e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.251386 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:10:57 crc kubenswrapper[5058]: I1014 07:10:57.259170 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 07:10:58 crc kubenswrapper[5058]: I1014 07:10:58.807958 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" path="/var/lib/kubelet/pods/5c7b2987-5495-4f39-a2d0-0c2461b31e1e/volumes" Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.782878 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.783484 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.784484 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.784566 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is 
running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.784995 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.786611 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.789288 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:10:59 crc kubenswrapper[5058]: E1014 07:10:59.789372 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" Oct 14 07:11:00 crc kubenswrapper[5058]: I1014 07:11:00.364747 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: i/o timeout" Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.782741 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.783844 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.784401 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 
07:11:04.784445 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.784677 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.787073 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.789506 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:04 crc kubenswrapper[5058]: E1014 07:11:04.789634 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.014645 5058 generic.go:334] "Generic (PLEG): container finished" podID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerID="65ea6b846599e21eb16e6adb81ef1c3d6526e566be9b38e51f208cad25bb0ef5" exitCode=0 Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.014996 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerDied","Data":"65ea6b846599e21eb16e6adb81ef1c3d6526e566be9b38e51f208cad25bb0ef5"} Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.109999 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298242 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298353 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298395 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298428 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298476 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7c7h\" (UniqueName: \"kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298521 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.298623 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config\") pod \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\" (UID: \"b7c83841-4ff0-41a0-be22-72af1a0f2bef\") " Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.306135 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.311769 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h" (OuterVolumeSpecName: "kube-api-access-z7c7h") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "kube-api-access-z7c7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.361504 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.363113 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.374852 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.378014 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config" (OuterVolumeSpecName: "config") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.385176 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b7c83841-4ff0-41a0-be22-72af1a0f2bef" (UID: "b7c83841-4ff0-41a0-be22-72af1a0f2bef"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402113 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7c7h\" (UniqueName: \"kubernetes.io/projected/b7c83841-4ff0-41a0-be22-72af1a0f2bef-kube-api-access-z7c7h\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402294 5058 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402485 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402540 5058 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402561 5058 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402616 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-config\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:05 crc kubenswrapper[5058]: I1014 07:11:05.402639 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c83841-4ff0-41a0-be22-72af1a0f2bef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.032238 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8d77dbf7-m7vgh" event={"ID":"b7c83841-4ff0-41a0-be22-72af1a0f2bef","Type":"ContainerDied","Data":"8284104a9035afc015b8dbef1411b9bccb954ac227611a2f308b4754fbeb604c"} Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.032303 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8d77dbf7-m7vgh" Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.032647 5058 scope.go:117] "RemoveContainer" containerID="cd295ee61f567f1a94e09d2c0e675720e548cfe9f3dbd734af3e07368dc83f14" Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.072515 5058 scope.go:117] "RemoveContainer" containerID="65ea6b846599e21eb16e6adb81ef1c3d6526e566be9b38e51f208cad25bb0ef5" Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.072729 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.086348 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b8d77dbf7-m7vgh"] Oct 14 07:11:06 crc kubenswrapper[5058]: I1014 07:11:06.803389 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" path="/var/lib/kubelet/pods/b7c83841-4ff0-41a0-be22-72af1a0f2bef/volumes" Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.783167 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.784334 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.785348 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.785424 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.785365 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.788210 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.790271 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:09 crc kubenswrapper[5058]: E1014 07:11:09.790355 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.782141 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.783123 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.783560 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.783599 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server" Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.784003 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.786168 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.787784 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 14 07:11:14 crc kubenswrapper[5058]: E1014 07:11:14.787876 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-wskz9" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.161009 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wskz9_9fd6e5f9-1445-4903-9025-d468f23f82d4/ovs-vswitchd/0.log" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.162917 5058 generic.go:334] "Generic (PLEG): container finished" podID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" exitCode=137 Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.162974 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerDied","Data":"a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c"} Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.414943 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wskz9_9fd6e5f9-1445-4903-9025-d468f23f82d4/ovs-vswitchd/0.log" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.416062 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wskz9" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597019 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597139 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597200 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6crf\" (UniqueName: \"kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597258 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run" (OuterVolumeSpecName: "var-run") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597314 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597404 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597479 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib\") pod \"9fd6e5f9-1445-4903-9025-d468f23f82d4\" (UID: \"9fd6e5f9-1445-4903-9025-d468f23f82d4\") " Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597501 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log" (OuterVolumeSpecName: "var-log") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597590 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.597615 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib" (OuterVolumeSpecName: "var-lib") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.598090 5058 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-etc-ovs\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.598149 5058 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-lib\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.598176 5058 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.598200 5058 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9fd6e5f9-1445-4903-9025-d468f23f82d4-var-log\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.599468 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts" (OuterVolumeSpecName: "scripts") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.606637 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf" (OuterVolumeSpecName: "kube-api-access-f6crf") pod "9fd6e5f9-1445-4903-9025-d468f23f82d4" (UID: "9fd6e5f9-1445-4903-9025-d468f23f82d4"). InnerVolumeSpecName "kube-api-access-f6crf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.700030 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fd6e5f9-1445-4903-9025-d468f23f82d4-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:17 crc kubenswrapper[5058]: I1014 07:11:17.700165 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6crf\" (UniqueName: \"kubernetes.io/projected/9fd6e5f9-1445-4903-9025-d468f23f82d4-kube-api-access-f6crf\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.185285 5058 generic.go:334] "Generic (PLEG): container finished" podID="19857bcc-939e-4543-ae17-09d142baebf2" containerID="f3d2c8c9184a355d31a5639a1537fe51d6e252644fd1b6581a47121a0d587325" exitCode=137 Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.185350 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"f3d2c8c9184a355d31a5639a1537fe51d6e252644fd1b6581a47121a0d587325"} Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.188227 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wskz9_9fd6e5f9-1445-4903-9025-d468f23f82d4/ovs-vswitchd/0.log" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.189152 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wskz9" event={"ID":"9fd6e5f9-1445-4903-9025-d468f23f82d4","Type":"ContainerDied","Data":"19bfff0a04242d1a0a8947c8c920e5d2f23c5f2b7cf6be922b8fe0c30c783ad4"} Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.189189 5058 scope.go:117] "RemoveContainer" containerID="a429749ca73309b192138784be16bf5ff5a50868b308d62f1c270b5926b73b3c" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.189213 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wskz9" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.220931 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-wskz9"] Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.226355 5058 scope.go:117] "RemoveContainer" containerID="3f784a27184ef496b13496dc61e3c5df2f7617cade8231e8c6b730a1814befb3" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.235299 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-wskz9"] Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.293453 5058 scope.go:117] "RemoveContainer" containerID="2a4e34472a8301fcad7040c61a4f765d6dca146ab018fd225366bade475734ce" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.293523 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.410385 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"19857bcc-939e-4543-ae17-09d142baebf2\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.410940 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache\") pod \"19857bcc-939e-4543-ae17-09d142baebf2\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.411196 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock\") pod \"19857bcc-939e-4543-ae17-09d142baebf2\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.411459 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rfbv\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv\") pod \"19857bcc-939e-4543-ae17-09d142baebf2\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.411688 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache" (OuterVolumeSpecName: "cache") pod "19857bcc-939e-4543-ae17-09d142baebf2" (UID: "19857bcc-939e-4543-ae17-09d142baebf2"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.411861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock" (OuterVolumeSpecName: "lock") pod "19857bcc-939e-4543-ae17-09d142baebf2" (UID: "19857bcc-939e-4543-ae17-09d142baebf2"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.412019 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") pod \"19857bcc-939e-4543-ae17-09d142baebf2\" (UID: \"19857bcc-939e-4543-ae17-09d142baebf2\") " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.412855 5058 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-lock\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.413137 5058 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/19857bcc-939e-4543-ae17-09d142baebf2-cache\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.416838 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "19857bcc-939e-4543-ae17-09d142baebf2" (UID: "19857bcc-939e-4543-ae17-09d142baebf2"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.416880 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "19857bcc-939e-4543-ae17-09d142baebf2" (UID: "19857bcc-939e-4543-ae17-09d142baebf2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.418029 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv" (OuterVolumeSpecName: "kube-api-access-9rfbv") pod "19857bcc-939e-4543-ae17-09d142baebf2" (UID: "19857bcc-939e-4543-ae17-09d142baebf2"). InnerVolumeSpecName "kube-api-access-9rfbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.514463 5058 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.514540 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.514565 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rfbv\" (UniqueName: \"kubernetes.io/projected/19857bcc-939e-4543-ae17-09d142baebf2-kube-api-access-9rfbv\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.541778 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.616586 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 14 07:11:18 crc kubenswrapper[5058]: I1014 07:11:18.809710 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" path="/var/lib/kubelet/pods/9fd6e5f9-1445-4903-9025-d468f23f82d4/volumes" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.215746 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"19857bcc-939e-4543-ae17-09d142baebf2","Type":"ContainerDied","Data":"855a298293805aa38171276606596651b3f61ee18614f5c31ef42a881c329e46"} Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.217003 5058 scope.go:117] "RemoveContainer" containerID="f3d2c8c9184a355d31a5639a1537fe51d6e252644fd1b6581a47121a0d587325" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.215888 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.263635 5058 scope.go:117] "RemoveContainer" containerID="f29643cf82745f7e73e78c4135a5914e8cf5724cd7128107573a6ee8be46e9b0" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.270246 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.277301 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.300399 5058 scope.go:117] "RemoveContainer" containerID="3f71ae0f3608a3c488d1248099617a68fad24788a23a6eb403537e7c705bc211" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.330142 5058 scope.go:117] "RemoveContainer" containerID="33d2eddcb88d038d345ef3bf102f653431e0f85ff697aaaf69ad45de80bb0ebc" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.350515 5058 scope.go:117] "RemoveContainer" containerID="509b41a491460621f4e48c030ec3096e244144015e697f031ef0d4a24b2efb95" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.368176 5058 scope.go:117] "RemoveContainer" containerID="6e399dec260edbe3841b098d9b9f1bf0dfe804d3a5aa01facc0ec4b2d640b130" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.385729 5058 scope.go:117] "RemoveContainer" containerID="343434d65ccc14910a7fcbead2fb88c5672e280565333bc8a1847ece2eded17c" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.413357 5058 scope.go:117] "RemoveContainer" containerID="96660315f94d74277f9b50658993503a1b43f9bef9a95f56461f122ccec07ad5" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.440491 5058 scope.go:117] "RemoveContainer" containerID="537911dbd262660ec60f6e9b0405d1a9f83ab10683dad7e3074f39dc6c1a11e4" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.469442 5058 scope.go:117] "RemoveContainer" containerID="431d79aac23d72bdbe76355998d8a1a48b5be7faa4f9c5fcb39335a05f1f4018" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.491641 5058 scope.go:117] "RemoveContainer" containerID="1597a4004ccd7362f11728ae0a39556e442ff02bd29578ce9c6c13e045eb6f2a" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.519231 5058 scope.go:117] "RemoveContainer" containerID="54425f7ca236aef845e21698b62205fe14abb46d0ffcc6865557173aab0f934a" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.544979 5058 scope.go:117] "RemoveContainer" containerID="a3e3288d52021b1c5dee76e985c4bf2ef23c6520d23e5ef4eb02fe977bc949b4" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.581561 5058 scope.go:117] "RemoveContainer" containerID="6d0292a646c2aabd8315d14682a37635c0746d89b568115520d8211b6b5b0c40" Oct 14 07:11:19 crc kubenswrapper[5058]: I1014 07:11:19.602314 5058 scope.go:117] "RemoveContainer" containerID="06b49fc229b22caf4d176c8d244f32c702aa60ee0c1bc3f0ee68302b65265d19" Oct 14 07:11:20 crc kubenswrapper[5058]: I1014 07:11:20.807490 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19857bcc-939e-4543-ae17-09d142baebf2" path="/var/lib/kubelet/pods/19857bcc-939e-4543-ae17-09d142baebf2/volumes" Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.265447 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"] Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266368 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata" Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 
07:11:27.266391 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266416 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266428 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266446 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266456 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266473 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266483 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266503 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerName="nova-cell0-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266515 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerName="nova-cell0-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266538 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a92fd66-33d0-4026-9f89-4fe1b7a57478" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266547 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a92fd66-33d0-4026-9f89-4fe1b7a57478" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266566 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="probe"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266575 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="probe"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266589 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-central-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266599 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-central-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266614 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266626 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266643 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266652 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266671 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="cinder-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266682 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="cinder-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266701 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266712 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266723 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-expirer"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266733 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-expirer"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266745 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266753 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266768 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="swift-recon-cron"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266775 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="swift-recon-cron"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266783 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266790 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266833 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266843 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266854 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266862 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266876 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266884 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266896 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266903 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266919 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="setup-container"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266926 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="setup-container"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266940 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266947 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266957 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266964 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266972 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-notification-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.266979 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-notification-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.266993 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267016 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267030 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="ovsdbserver-sb"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267037 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="ovsdbserver-sb"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267049 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" containerName="kube-state-metrics"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267058 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" containerName="kube-state-metrics"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267069 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267077 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267086 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267093 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267105 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267112 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267126 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267133 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267142 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267149 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267163 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267171 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267183 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="mysql-bootstrap"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267191 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="mysql-bootstrap"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267198 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267205 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267216 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267223 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267237 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267244 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267254 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267261 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267269 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267277 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267285 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267292 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267301 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267308 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267317 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267324 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267336 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2af2c14-fb00-45d0-8414-8754189455a0" containerName="memcached"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267343 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2af2c14-fb00-45d0-8414-8754189455a0" containerName="memcached"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267355 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267362 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267373 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267379 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267389 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="setup-container"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267397 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="setup-container"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267406 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="dnsmasq-dns"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267413 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="dnsmasq-dns"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267425 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="mysql-bootstrap"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267432 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="mysql-bootstrap"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267440 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server-init"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267447 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server-init"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267459 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267467 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267479 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267487 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267497 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267504 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267512 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267520 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267532 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267539 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267550 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267558 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267567 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="sg-core"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267577 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="sg-core"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267588 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="init"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267595 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="init"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267610 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" containerName="keystone-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267617 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" containerName="keystone-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267634 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267641 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267652 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267659 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267669 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267676 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267687 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="ovsdbserver-nb"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267694 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="ovsdbserver-nb"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267709 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267716 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267727 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267734 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267749 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267756 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267770 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267777 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267789 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267819 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267832 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267840 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267849 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267857 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267865 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267873 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267883 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" containerName="nova-scheduler-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267890 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" containerName="nova-scheduler-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267902 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.267912 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.267922 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-reaper"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.268257 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-reaper"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.268280 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="rsync"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.268293 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="rsync"
Oct 14 07:11:27 crc kubenswrapper[5058]: E1014 07:11:27.268311 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df614c68-4293-4c6c-a18e-e87dfcf8fe26" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.268321 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df614c68-4293-4c6c-a18e-e87dfcf8fe26" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269021 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269053 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269064 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="ovsdbserver-nb"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269077 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-central-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269090 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c560dc37-2c84-4468-800c-d90d8f8c158e" containerName="dnsmasq-dns"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269101 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-expirer"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269112 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269122 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269136 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="ovsdbserver-sb"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269145 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269157 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269173 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda633cb-5e3d-42da-b21e-be6d5a984f2f" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269193 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269212 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269225 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-metadata"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269238 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="18689556-e39f-4c5d-add8-aa934c468f1b" containerName="nova-scheduler-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269254 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fe9fcd-194a-45e1-b1c3-08bdd5331810" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269271 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="ceilometer-notification-agent"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269315 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="df614c68-4293-4c6c-a18e-e87dfcf8fe26" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269335 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f969a6-6fea-40c8-9254-284205f5b3ea" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269346 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269361 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="swift-recon-cron"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269376 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef71f1de-0039-4574-955a-ce760eb7ea3e" containerName="barbican-keystone-listener"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269391 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d9700c-c7fa-4020-939c-ce42c1b3afe8" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269409 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6771f490-1ad7-4f61-98a0-d52df29ef7c8" containerName="nova-cell1-novncproxy-novncproxy"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269424 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7a9685-6d40-487b-aebf-f0a01ace044b" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269440 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b753342c-4a7e-4bf6-809a-3c5bc083ba6a" containerName="rabbitmq"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269451 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a36ee5-eb19-4390-a07e-3e22a191ad58" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269461 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269478 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269491 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="rsync"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269501 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed42eec9-b3be-4971-914c-dc5d6603b0e1" containerName="keystone-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269518 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269534 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269549 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269564 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afd2a77-5f33-4fed-8b1d-5ebc565a24e9" containerName="placement-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269577 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="195c5812-232a-4b0e-9c5d-ba4b1af44126" containerName="nova-cell0-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269591 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269604 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc107f0-eaa6-4aa4-b5ea-05bcc76f0409" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269620 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7b2987-5495-4f39-a2d0-0c2461b31e1e" containerName="nova-cell1-conductor-conductor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269635 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="133d4cdf-58ed-4544-8f05-328587a2b701" containerName="galera"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269646 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="ovn-northd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269662 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-reaper"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269672 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269692 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="account-auditor"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269705 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ccef3b-4aab-4a9f-acce-5a26c3f00b27" containerName="barbican-worker-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269717 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269736 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovsdb-server"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269746 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269764 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2af2c14-fb00-45d0-8414-8754189455a0" containerName="memcached"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269774 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="42219495-c664-4b9f-a5a7-f66b0bea105a" containerName="sg-core"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269790 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e04a9c0-9542-49e8-b185-87a9ab267e7a" containerName="nova-metadata-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269830 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6faed14d-d25e-43b8-96db-c64b6b3feece" containerName="ovn-controller"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269843 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="container-replicator"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269853 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="67eb8142-3269-43d9-a2bc-a6afbaa991f9" containerName="kube-state-metrics"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269867 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="19857bcc-939e-4543-ae17-09d142baebf2" containerName="object-updater"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269883 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c7b950-e1ff-4b4d-8531-68d7c9ba6e7c" containerName="barbican-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269894 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae7394b-1174-4a56-96d5-7fe0598e1343" containerName="glance-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269909 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0820570e-22cb-4f0b-9ee4-f5237ccdfdff" containerName="proxy-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269924 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc792cd6-15f5-4ef1-a383-4cecacce0df3" containerName="cinder-api-log"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269936 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d37c5f-3d6c-4f84-a681-b4bd9dffb466" containerName="openstack-network-exporter"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269946 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269959 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c83841-4ff0-41a0-be22-72af1a0f2bef" containerName="neutron-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269969 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a92fd66-33d0-4026-9f89-4fe1b7a57478" containerName="mariadb-account-delete"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269981 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="cinder-scheduler"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.269993 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebf4e1-4257-4a45-a776-734a016d955f" containerName="probe"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.270003 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="61117082-2145-4117-9de9-d283039a6d7d" containerName="glance-httpd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.270011 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd6e5f9-1445-4903-9025-d468f23f82d4" containerName="ovs-vswitchd"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.270022 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6c9e27-cbe7-4371-8c0c-614755871b4e" containerName="nova-api-api"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.272187 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.294186 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"]
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.364008 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.364333 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4z8\" (UniqueName: \"kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.364576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.466023 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.466356 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4z8\" (UniqueName: \"kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.466457 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.467259 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.467249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.495848 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp4z8\" (UniqueName: \"kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8\") pod \"redhat-operators-fhw7t\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") " pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:27 crc kubenswrapper[5058]: I1014 07:11:27.598523 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:28 crc kubenswrapper[5058]: I1014 07:11:28.054293 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"]
Oct 14 07:11:28 crc kubenswrapper[5058]: I1014 07:11:28.337263 5058 generic.go:334] "Generic (PLEG): container finished" podID="4719d3bd-e085-4913-ba43-7b928d57b181" containerID="2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64" exitCode=0
Oct 14 07:11:28 crc kubenswrapper[5058]: I1014 07:11:28.337371 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerDied","Data":"2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64"}
Oct 14 07:11:28 crc kubenswrapper[5058]: I1014 07:11:28.337589 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerStarted","Data":"6c03e3f99b803f6bc9462ecae2aa37d67d85f4e20368d9708fa9d7ee9f3af5db"}
Oct 14 07:11:28 crc kubenswrapper[5058]: I1014 07:11:28.338683 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 07:11:34 crc kubenswrapper[5058]: I1014 07:11:34.758453 5058 scope.go:117] "RemoveContainer" containerID="754811d971b5ee596d7950d9970b2744d554f9a03ec5093c8a2ff92a72eb0be3"
Oct 14 07:11:35 crc kubenswrapper[5058]: I1014 07:11:35.401839 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerStarted","Data":"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6"}
Oct 14 07:11:36 crc kubenswrapper[5058]: I1014 07:11:36.416290 5058 generic.go:334] "Generic (PLEG): container finished" podID="4719d3bd-e085-4913-ba43-7b928d57b181" containerID="ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6" exitCode=0
Oct 14 07:11:36 crc kubenswrapper[5058]: I1014 07:11:36.416366 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerDied","Data":"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6"}
Oct 14 07:11:37 crc kubenswrapper[5058]: I1014 07:11:37.430679 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerStarted","Data":"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca"}
Oct 14 07:11:37 crc kubenswrapper[5058]: I1014 07:11:37.452628 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhw7t" podStartSLOduration=1.860147352 podStartE2EDuration="10.452605436s" podCreationTimestamp="2025-10-14 07:11:27 +0000 UTC" firstStartedPulling="2025-10-14 07:11:28.338493513 +0000 UTC m=+1436.249577319" lastFinishedPulling="2025-10-14 07:11:36.930951567 +0000 UTC m=+1444.842035403" observedRunningTime="2025-10-14 07:11:37.446635298 +0000 UTC m=+1445.357719164" watchObservedRunningTime="2025-10-14 07:11:37.452605436 +0000 UTC m=+1445.363689252"
Oct 14 07:11:37 crc kubenswrapper[5058]: I1014 07:11:37.599450 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:37 crc kubenswrapper[5058]: I1014 07:11:37.599519 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:38 crc kubenswrapper[5058]: I1014 07:11:38.637779 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhw7t" podUID="4719d3bd-e085-4913-ba43-7b928d57b181" containerName="registry-server" probeResult="failure" output=<
Oct 14 07:11:38 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s
Oct 14 07:11:38 crc kubenswrapper[5058]: >
Oct 14 07:11:47 crc kubenswrapper[5058]: I1014 07:11:47.675185 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:47 crc kubenswrapper[5058]: I1014 07:11:47.738004 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fhw7t"
Oct 14 07:11:47 crc kubenswrapper[5058]: I1014 07:11:47.806453 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"]
Oct 14 07:11:47 crc kubenswrapper[5058]: I1014 07:11:47.914278 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"]
Oct 14 07:11:47 crc kubenswrapper[5058]: I1014 07:11:47.914809 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7zknx" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="registry-server" containerID="cri-o://830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1" gracePeriod=2
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.367755 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.414038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content\") pod \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") "
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.414114 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities\") pod \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") "
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.414135 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jnf4\" (UniqueName: \"kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4\") pod \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\" (UID: \"b90f51c3-20e7-46f3-af7f-05ac7d41bb66\") "
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.416075 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities" (OuterVolumeSpecName: "utilities") pod "b90f51c3-20e7-46f3-af7f-05ac7d41bb66" (UID: "b90f51c3-20e7-46f3-af7f-05ac7d41bb66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.419865 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4" (OuterVolumeSpecName: "kube-api-access-5jnf4") pod "b90f51c3-20e7-46f3-af7f-05ac7d41bb66" (UID: "b90f51c3-20e7-46f3-af7f-05ac7d41bb66"). InnerVolumeSpecName "kube-api-access-5jnf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.479652 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b90f51c3-20e7-46f3-af7f-05ac7d41bb66" (UID: "b90f51c3-20e7-46f3-af7f-05ac7d41bb66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.515056 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jnf4\" (UniqueName: \"kubernetes.io/projected/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-kube-api-access-5jnf4\") on node \"crc\" DevicePath \"\""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.515086 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.515095 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90f51c3-20e7-46f3-af7f-05ac7d41bb66-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.560942 5058 generic.go:334] "Generic (PLEG): container finished" podID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerID="830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1" exitCode=0
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.561002 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerDied","Data":"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"}
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.561028 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zknx"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.561068 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zknx" event={"ID":"b90f51c3-20e7-46f3-af7f-05ac7d41bb66","Type":"ContainerDied","Data":"7d10c5bce4771ee145097a5227f47a2938f87cebe50e4b1c1abb3aad893c5328"}
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.561092 5058 scope.go:117] "RemoveContainer" containerID="830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.585080 5058 scope.go:117] "RemoveContainer" containerID="f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.588372 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"]
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.595600 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7zknx"]
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.616903 5058 scope.go:117] "RemoveContainer" containerID="4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.633056 5058 scope.go:117] "RemoveContainer" containerID="830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"
Oct 14 07:11:48 crc kubenswrapper[5058]: E1014 07:11:48.633448 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1\": container with ID starting with 830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1 not found: ID does not exist" containerID="830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.633491 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1"} err="failed to get container status \"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1\": rpc error: code = NotFound desc = could not find container \"830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1\": container with ID starting with 830868b0fff42dea1732968388e26d35581a1a257dcc34df37b44bb81194dae1 not found: ID does not exist"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.633517 5058 scope.go:117] "RemoveContainer" containerID="f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"
Oct 14 07:11:48 crc kubenswrapper[5058]: E1014 07:11:48.633886 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6\": container with ID starting with f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6 not found: ID does not exist" containerID="f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.633921 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6"} err="failed to get container status \"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6\": rpc error: code = NotFound desc = could not find container \"f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6\": container with ID starting with f7c0a1eb0c8e8dbf2a14c698a6b739ecfc9d716a5e5be5262f6486a80d40b9b6 not found: ID does not exist"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.633948 5058 scope.go:117] "RemoveContainer" containerID="4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026"
Oct 14 07:11:48 crc kubenswrapper[5058]: E1014 07:11:48.634196 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026\": container with ID starting with 4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026 not found: ID does not exist" containerID="4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.634235 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026"} err="failed to get container status \"4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026\": rpc error: code = NotFound desc = could not find container \"4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026\": container with ID starting with 4572b7373966d16bf75ce75023106fb3d5ad575041a2e5de1717b6d22081f026 not found: ID does not exist"
Oct 14 07:11:48 crc kubenswrapper[5058]: I1014 07:11:48.798873 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" path="/var/lib/kubelet/pods/b90f51c3-20e7-46f3-af7f-05ac7d41bb66/volumes"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.404145 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-465fw"]
Oct 14 07:11:54 crc kubenswrapper[5058]: E1014 07:11:54.405144 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="registry-server"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.405160 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="registry-server"
Oct 14 07:11:54 crc kubenswrapper[5058]: E1014 07:11:54.405188 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="extract-utilities"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.405197 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="extract-utilities"
Oct 14 07:11:54 crc kubenswrapper[5058]: E1014 07:11:54.405210 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="extract-content"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.405219 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="extract-content"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.405438 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90f51c3-20e7-46f3-af7f-05ac7d41bb66" containerName="registry-server"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.406624 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.433884 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-465fw"]
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.502181 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.502261 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnd6\" (UniqueName: \"kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.502343 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.603338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.603414 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnd6\" (UniqueName: \"kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.603495 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.604022 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.604278 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.629864 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnd6\" (UniqueName: \"kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6\") pod \"certified-operators-465fw\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:54 crc kubenswrapper[5058]: I1014 07:11:54.734596 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:11:55 crc kubenswrapper[5058]: I1014 07:11:55.212940 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-465fw"]
Oct 14 07:11:55 crc kubenswrapper[5058]: I1014 07:11:55.637362 5058 generic.go:334] "Generic (PLEG): container finished" podID="98632d2f-897c-4530-8a0e-a59dd212da31" containerID="a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e" exitCode=0
Oct 14 07:11:55 crc kubenswrapper[5058]: I1014 07:11:55.637460 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerDied","Data":"a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e"}
Oct 14 07:11:55 crc kubenswrapper[5058]: I1014 07:11:55.637944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerStarted","Data":"dca2067ad2eceea931d3c74f59056b2fa44daf817ed5f467db4fe9351cd8ff66"}
Oct 14 07:11:57 crc kubenswrapper[5058]: I1014 07:11:57.671407 5058 generic.go:334] "Generic (PLEG): container finished" podID="98632d2f-897c-4530-8a0e-a59dd212da31" containerID="177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9" exitCode=0
Oct 14 07:11:57 crc kubenswrapper[5058]: I1014 07:11:57.671488 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerDied","Data":"177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9"}
Oct 14 07:11:58 crc kubenswrapper[5058]: I1014 07:11:58.683872 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerStarted","Data":"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633"}
Oct 14 07:11:58 crc kubenswrapper[5058]: I1014 07:11:58.705984 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-465fw" podStartSLOduration=2.259018764 podStartE2EDuration="4.705966887s" podCreationTimestamp="2025-10-14 07:11:54 +0000 UTC" firstStartedPulling="2025-10-14 07:11:55.641696426 +0000 UTC m=+1463.552780232" lastFinishedPulling="2025-10-14 07:11:58.088644549 +0000 UTC m=+1465.999728355" observedRunningTime="2025-10-14 07:11:58.705031351 +0000 UTC m=+1466.616115247" watchObservedRunningTime="2025-10-14 07:11:58.705966887 +0000 UTC m=+1466.617050693"
Oct 14 07:12:04 crc kubenswrapper[5058]: I1014 07:12:04.735601 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:12:04 crc kubenswrapper[5058]: I1014 07:12:04.737559 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:12:04 crc kubenswrapper[5058]: I1014 07:12:04.810400 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:12:05 crc kubenswrapper[5058]: I1014 07:12:05.821133 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-465fw"
Oct 14 07:12:05 crc kubenswrapper[5058]: I1014 07:12:05.878123 5058 kubelet.go:2437] "SyncLoop DELETE"
source="api" pods=["openshift-marketplace/certified-operators-465fw"] Oct 14 07:12:07 crc kubenswrapper[5058]: I1014 07:12:07.774582 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-465fw" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="registry-server" containerID="cri-o://fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633" gracePeriod=2 Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.322561 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-465fw" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.518101 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpnd6\" (UniqueName: \"kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6\") pod \"98632d2f-897c-4530-8a0e-a59dd212da31\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.518282 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content\") pod \"98632d2f-897c-4530-8a0e-a59dd212da31\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.518351 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities\") pod \"98632d2f-897c-4530-8a0e-a59dd212da31\" (UID: \"98632d2f-897c-4530-8a0e-a59dd212da31\") " Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.519311 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities" (OuterVolumeSpecName: "utilities") pod "98632d2f-897c-4530-8a0e-a59dd212da31" (UID: "98632d2f-897c-4530-8a0e-a59dd212da31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.529256 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6" (OuterVolumeSpecName: "kube-api-access-lpnd6") pod "98632d2f-897c-4530-8a0e-a59dd212da31" (UID: "98632d2f-897c-4530-8a0e-a59dd212da31"). InnerVolumeSpecName "kube-api-access-lpnd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.600258 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98632d2f-897c-4530-8a0e-a59dd212da31" (UID: "98632d2f-897c-4530-8a0e-a59dd212da31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.619968 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpnd6\" (UniqueName: \"kubernetes.io/projected/98632d2f-897c-4530-8a0e-a59dd212da31-kube-api-access-lpnd6\") on node \"crc\" DevicePath \"\"" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.620280 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.620429 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98632d2f-897c-4530-8a0e-a59dd212da31-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.793332 5058 generic.go:334] "Generic (PLEG): container finished" podID="98632d2f-897c-4530-8a0e-a59dd212da31" containerID="fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633" exitCode=0 Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.793409 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-465fw" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.803445 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerDied","Data":"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633"} Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.803502 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-465fw" event={"ID":"98632d2f-897c-4530-8a0e-a59dd212da31","Type":"ContainerDied","Data":"dca2067ad2eceea931d3c74f59056b2fa44daf817ed5f467db4fe9351cd8ff66"} Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.803538 5058 scope.go:117] "RemoveContainer" containerID="fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.873905 5058 scope.go:117] "RemoveContainer" containerID="177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.877399 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-465fw"] Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.904933 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-465fw"] Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.931035 5058 scope.go:117] "RemoveContainer" containerID="a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.959275 5058 scope.go:117] "RemoveContainer" containerID="fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633" Oct 14 07:12:08 crc kubenswrapper[5058]: E1014 07:12:08.959826 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633\": container with ID starting with fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633 not found: ID does not exist" containerID="fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.959868 
5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633"} err="failed to get container status \"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633\": rpc error: code = NotFound desc = could not find container \"fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633\": container with ID starting with fd5dc9bf94eb60afc16abd436859e319272f5a67f15b1853d9f8a1cd24b01633 not found: ID does not exist" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.959887 5058 scope.go:117] "RemoveContainer" containerID="177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9" Oct 14 07:12:08 crc kubenswrapper[5058]: E1014 07:12:08.960270 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9\": container with ID starting with 177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9 not found: ID does not exist" containerID="177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.960289 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9"} err="failed to get container status \"177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9\": rpc error: code = NotFound desc = could not find container \"177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9\": container with ID starting with 177cfb271b4952cf47e1242b52ca05e8f45c6fc9f9c425e91997d0d98cbccbc9 not found: ID does not exist" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.960301 5058 scope.go:117] "RemoveContainer" containerID="a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e" Oct 14 07:12:08 crc kubenswrapper[5058]: E1014 07:12:08.960614 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e\": container with ID starting with a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e not found: ID does not exist" containerID="a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e" Oct 14 07:12:08 crc kubenswrapper[5058]: I1014 07:12:08.960633 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e"} err="failed to get container status \"a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e\": rpc error: code = NotFound desc = could not find container \"a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e\": container with ID starting with a58c1f475f8d24deae17d87146799f359acd3c9408db9244eef1c82b6f7dbe7e not found: ID does not exist" Oct 14 07:12:10 crc kubenswrapper[5058]: I1014 07:12:10.802562 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" path="/var/lib/kubelet/pods/98632d2f-897c-4530-8a0e-a59dd212da31/volumes" Oct 14 07:12:33 crc kubenswrapper[5058]: I1014 07:12:33.656473 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:12:33 crc kubenswrapper[5058]: I1014 07:12:33.657191 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.404212 5058 scope.go:117] "RemoveContainer" containerID="662166b77c72efe18e0ab60702f8c96f49d147c5baff21546e4f166ab46b0225" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.435837 5058 scope.go:117] "RemoveContainer" containerID="52e19a73054745e7a98abc9f1aa91d3786a1f937f02d219f7e8db46c396014d4" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.483654 5058 scope.go:117] "RemoveContainer" containerID="e486cd47d1cab01f6ead3b605ca5d0c12f1f0966f60d7d5f68b287aaa532a10c" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.524722 5058 scope.go:117] "RemoveContainer" containerID="41500401a618a5fb16403d88a375426e53aded9cff72d89c8c0891e490e3bef3" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.568887 5058 scope.go:117] "RemoveContainer" containerID="a198fefec8d144e30064366b09f01772a3c0cf42d9352708a7015336d4c0e2d0" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.622995 5058 scope.go:117] "RemoveContainer" containerID="4d72dbf398feb91143056dfbb8ffff02c4e1ac77a9f3809301bc450498d01697" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.652822 5058 scope.go:117] "RemoveContainer" containerID="4ab24366db71622cf78f4f474ac5de29381375bdefa142cdd428db24843f358b" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.682223 5058 scope.go:117] "RemoveContainer" containerID="215a9a37d4d6e18d5aa8de66dd691d94290c736de1390cea3886c366d4223a76" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.718091 5058 scope.go:117] "RemoveContainer" containerID="be1f3befeb6a58f525d18137fbcd6f385642563af1087f8abe45a5caa94ac3e4" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.755995 5058 scope.go:117] "RemoveContainer" containerID="078270d91bea9cf71428743354ee59b1d8fa1ef35926a157089edeb6eb699e29" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.787855 5058 scope.go:117] "RemoveContainer" containerID="2f8dbfbad7c8bbb9c4315a71e9431f0677a4df9a5188f3ff63814f8bf788017e" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.819871 5058 scope.go:117] "RemoveContainer" containerID="6c1b6b6947fbcea5df829f8242ad5d14a9bbdb7a7102295008036d5e74029b73" Oct 14 07:12:35 crc kubenswrapper[5058]: I1014 07:12:35.850374 5058 scope.go:117] "RemoveContainer" containerID="0f272645c88a90b733a331db5ccb33b7b4626945cf20138c653e291f8cc2f38d" Oct 14 07:13:03 crc kubenswrapper[5058]: I1014 07:13:03.656678 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:13:03 crc kubenswrapper[5058]: I1014 07:13:03.657402 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:13:33 crc 
kubenswrapper[5058]: I1014 07:13:33.655921 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:13:33 crc kubenswrapper[5058]: I1014 07:13:33.656604 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:13:33 crc kubenswrapper[5058]: I1014 07:13:33.656662 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:13:33 crc kubenswrapper[5058]: I1014 07:13:33.657717 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:13:33 crc kubenswrapper[5058]: I1014 07:13:33.657854 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" gracePeriod=600 Oct 14 07:13:33 crc kubenswrapper[5058]: E1014 07:13:33.807896 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:13:34 crc kubenswrapper[5058]: I1014 07:13:34.760231 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" exitCode=0 Oct 14 07:13:34 crc kubenswrapper[5058]: I1014 07:13:34.760302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5"} Oct 14 07:13:34 crc kubenswrapper[5058]: I1014 07:13:34.760365 5058 scope.go:117] "RemoveContainer" containerID="f6ea463aea20323602cc89ea6f1fb50e04214ddd78d3d4678e480df5c73c16fc" Oct 14 07:13:34 crc kubenswrapper[5058]: I1014 07:13:34.760742 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:13:34 crc kubenswrapper[5058]: E1014 07:13:34.760999 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:13:35 crc kubenswrapper[5058]: I1014 07:13:35.775379 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:13:35 crc kubenswrapper[5058]: E1014 07:13:35.776180 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.240217 5058 scope.go:117] "RemoveContainer" containerID="3ea1194fa8b6fad8061269f5174e14a7965ef29f4af784db1da27697323e14ad" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.271667 5058 scope.go:117] "RemoveContainer" containerID="1a2e702ac777627cc9b1f6f7b7f658646716149be400400fe2ddfa4f64dbf7eb" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.345548 5058 scope.go:117] "RemoveContainer" containerID="13f6ef3c2ee8fc26ccf62a297d8ebb59220986ea408bbf21d781dd0e3ecb304e" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.370867 5058 scope.go:117] "RemoveContainer" containerID="3bf89c61c5eddf7a8e956e671d562dd7749f03ce83b3c01ad37a890b11967d16" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.399969 5058 scope.go:117] "RemoveContainer" containerID="b42167829db655ee29d561f092b8f822ca55d825ad8a0a2d947e23d43b26c50a" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.462167 5058 scope.go:117] "RemoveContainer" containerID="0bb836c19aa15735f6bf1496006b97de7fede44e1beaf2c25a807476bc1dca66" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.526111 5058 scope.go:117] "RemoveContainer" containerID="af94d219d23122103b5302c378076785e59888115f6d1003e054e31c087d0d0c" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.574538 5058 scope.go:117] "RemoveContainer" containerID="4320f5f8bacdb0392539f3db341e4f1d4087365766c4007bfa9751bc8b9a17f9" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.602983 5058 scope.go:117] "RemoveContainer" containerID="e9c65bbef4be0b4564dab46e8756ca50ac118a12115c200ffc6633fac603b802" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.644528 5058 scope.go:117] "RemoveContainer" containerID="a397d6458dbbd90706435ced5073577a0541def8ab0088e116d3ecae79788296" Oct 14 07:13:36 crc kubenswrapper[5058]: I1014 07:13:36.698556 5058 scope.go:117] "RemoveContainer" containerID="2207b9755cce7061a8c0d946069da9de3f78f8089645b8f4c379126254f80a99" Oct 14 07:13:50 crc kubenswrapper[5058]: I1014 07:13:50.789942 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:13:50 crc kubenswrapper[5058]: E1014 07:13:50.791102 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:14:05 crc 
kubenswrapper[5058]: I1014 07:14:05.790606 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:14:05 crc kubenswrapper[5058]: E1014 07:14:05.791654 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:14:18 crc kubenswrapper[5058]: I1014 07:14:18.790688 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:14:18 crc kubenswrapper[5058]: E1014 07:14:18.791578 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:14:29 crc kubenswrapper[5058]: I1014 07:14:29.790085 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:14:29 crc kubenswrapper[5058]: E1014 07:14:29.790825 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:14:36 crc kubenswrapper[5058]: I1014 07:14:36.970281 5058 scope.go:117] "RemoveContainer" containerID="092714e5ca8fc048bf747f1e11af7e58ef585d7d510fd82008a0a96a1b8da12e" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.004999 5058 scope.go:117] "RemoveContainer" containerID="47bfcc71f584a9f3235d3d41f319f1ae35c0b61c540b1ac3eb73120782132327" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.059154 5058 scope.go:117] "RemoveContainer" containerID="2c1ca9810eab577f6b8f32d0611934c89232c17f52ff78d8d3748f5941c6275f" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.089587 5058 scope.go:117] "RemoveContainer" containerID="ccd63adf9ff2533f48ef01689c4ae84b7ad678fd3261c983558f6e60e740f9b0" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.132853 5058 scope.go:117] "RemoveContainer" containerID="a2c72a0b9b3c3872272f34176a0ed5176dff60e9f134143e49938e8d99253ac2" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.155052 5058 scope.go:117] "RemoveContainer" containerID="609b59e6d612b452795141ff2cf05dd02e639afa347efe89928750c54c555f15" Oct 14 07:14:37 crc kubenswrapper[5058]: I1014 07:14:37.189774 5058 scope.go:117] "RemoveContainer" containerID="b237b52ff6889669f24b31456f7208fe61fe597bad92f2b9b63ac597254f4eaf" Oct 14 07:14:44 crc kubenswrapper[5058]: I1014 07:14:44.791563 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:14:44 crc kubenswrapper[5058]: E1014 07:14:44.793223 5058 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:14:57 crc kubenswrapper[5058]: I1014 07:14:57.789981 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:14:57 crc kubenswrapper[5058]: E1014 07:14:57.790835 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.171576 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs"] Oct 14 07:15:00 crc kubenswrapper[5058]: E1014 07:15:00.172833 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="extract-content" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.172924 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="extract-content" Oct 14 07:15:00 crc kubenswrapper[5058]: E1014 07:15:00.172997 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="registry-server" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.173060 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="registry-server" Oct 14 07:15:00 crc kubenswrapper[5058]: E1014 07:15:00.173123 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="extract-utilities" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.173190 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="extract-utilities" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.173400 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="98632d2f-897c-4530-8a0e-a59dd212da31" containerName="registry-server" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.173952 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.178032 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.178577 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.182397 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs"] Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.294598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjqq\" (UniqueName: \"kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.294731 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.294943 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.395978 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjqq\" (UniqueName: \"kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.396090 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.396146 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.397411 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume\") pod 
\"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.402824 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.412251 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjqq\" (UniqueName: \"kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq\") pod \"collect-profiles-29340435-pmqfs\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.503041 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:00 crc kubenswrapper[5058]: I1014 07:15:00.960537 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs"] Oct 14 07:15:01 crc kubenswrapper[5058]: I1014 07:15:01.646846 5058 generic.go:334] "Generic (PLEG): container finished" podID="b17f8812-b143-4038-859b-1c4ca7d8ce2b" containerID="b73128a6dcd19f0af19b21a5c77f653f6f43aa1d0e1778d8ffc918836561bb24" exitCode=0 Oct 14 07:15:01 crc kubenswrapper[5058]: I1014 07:15:01.647199 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" event={"ID":"b17f8812-b143-4038-859b-1c4ca7d8ce2b","Type":"ContainerDied","Data":"b73128a6dcd19f0af19b21a5c77f653f6f43aa1d0e1778d8ffc918836561bb24"} Oct 14 07:15:01 crc kubenswrapper[5058]: I1014 07:15:01.647223 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" event={"ID":"b17f8812-b143-4038-859b-1c4ca7d8ce2b","Type":"ContainerStarted","Data":"1a9c3a7d9524ac60e40d15ef2067da90162246d70b6447b0627773009d324035"} Oct 14 07:15:02 crc kubenswrapper[5058]: I1014 07:15:02.982461 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.138727 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjqq\" (UniqueName: \"kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq\") pod \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.139382 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume\") pod \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.139646 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume\") pod \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\" (UID: \"b17f8812-b143-4038-859b-1c4ca7d8ce2b\") " Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.149404 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq" (OuterVolumeSpecName: "kube-api-access-fbjqq") pod "b17f8812-b143-4038-859b-1c4ca7d8ce2b" (UID: "b17f8812-b143-4038-859b-1c4ca7d8ce2b"). InnerVolumeSpecName "kube-api-access-fbjqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.150474 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b17f8812-b143-4038-859b-1c4ca7d8ce2b" (UID: "b17f8812-b143-4038-859b-1c4ca7d8ce2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.150726 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b17f8812-b143-4038-859b-1c4ca7d8ce2b" (UID: "b17f8812-b143-4038-859b-1c4ca7d8ce2b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.242376 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b17f8812-b143-4038-859b-1c4ca7d8ce2b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.242460 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b17f8812-b143-4038-859b-1c4ca7d8ce2b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.242491 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbjqq\" (UniqueName: \"kubernetes.io/projected/b17f8812-b143-4038-859b-1c4ca7d8ce2b-kube-api-access-fbjqq\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.667920 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" event={"ID":"b17f8812-b143-4038-859b-1c4ca7d8ce2b","Type":"ContainerDied","Data":"1a9c3a7d9524ac60e40d15ef2067da90162246d70b6447b0627773009d324035"} Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.667994 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs" Oct 14 07:15:03 crc kubenswrapper[5058]: I1014 07:15:03.668641 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9c3a7d9524ac60e40d15ef2067da90162246d70b6447b0627773009d324035" Oct 14 07:15:11 crc kubenswrapper[5058]: I1014 07:15:11.790327 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:15:11 crc kubenswrapper[5058]: E1014 07:15:11.791535 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:15:25 crc kubenswrapper[5058]: I1014 07:15:25.789747 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:15:25 crc kubenswrapper[5058]: E1014 07:15:25.790952 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.806212 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:26 crc kubenswrapper[5058]: E1014 07:15:26.806593 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f8812-b143-4038-859b-1c4ca7d8ce2b" containerName="collect-profiles" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.806611 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f8812-b143-4038-859b-1c4ca7d8ce2b" 
containerName="collect-profiles" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.806855 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17f8812-b143-4038-859b-1c4ca7d8ce2b" containerName="collect-profiles" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.808453 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.812255 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.842848 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.842927 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2528\" (UniqueName: \"kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.842958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.944037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.944112 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2528\" (UniqueName: \"kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.944138 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.944572 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.944858 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:26 crc kubenswrapper[5058]: I1014 07:15:26.962521 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2528\" (UniqueName: \"kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528\") pod \"redhat-marketplace-48j5l\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:27 crc kubenswrapper[5058]: I1014 07:15:27.138054 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:27 crc kubenswrapper[5058]: I1014 07:15:27.372861 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:27 crc kubenswrapper[5058]: I1014 07:15:27.921636 5058 generic.go:334] "Generic (PLEG): container finished" podID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerID="dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901" exitCode=0 Oct 14 07:15:27 crc kubenswrapper[5058]: I1014 07:15:27.921683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerDied","Data":"dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901"} Oct 14 07:15:27 crc kubenswrapper[5058]: I1014 07:15:27.922003 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerStarted","Data":"303c261be934c98c237e765cac31ed08d497240acb1f07f27c764fa84615dc03"} Oct 14 07:15:29 crc kubenswrapper[5058]: I1014 07:15:29.938226 5058 generic.go:334] "Generic (PLEG): container finished" podID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerID="acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af" exitCode=0 Oct 14 07:15:29 crc kubenswrapper[5058]: I1014 07:15:29.938658 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerDied","Data":"acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af"} Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.856008 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.857873 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.871525 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.920875 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbzj\" (UniqueName: \"kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.921033 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.921288 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.961263 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerStarted","Data":"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e"} Oct 14 07:15:31 crc kubenswrapper[5058]: I1014 07:15:31.982600 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48j5l" podStartSLOduration=2.876101126 podStartE2EDuration="5.98257994s" podCreationTimestamp="2025-10-14 07:15:26 +0000 UTC" firstStartedPulling="2025-10-14 07:15:27.923676432 +0000 UTC m=+1675.834760238" lastFinishedPulling="2025-10-14 07:15:31.030155246 +0000 UTC m=+1678.941239052" observedRunningTime="2025-10-14 07:15:31.978353081 +0000 UTC m=+1679.889436907" watchObservedRunningTime="2025-10-14 07:15:31.98257994 +0000 UTC m=+1679.893663746" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.022296 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.022381 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.022445 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbzj\" (UniqueName: \"kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj\") pod \"community-operators-q9h77\" (UID: 
\"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.022873 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.022884 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.050113 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbzj\" (UniqueName: \"kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj\") pod \"community-operators-q9h77\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.182822 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.731349 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:32 crc kubenswrapper[5058]: W1014 07:15:32.732350 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb015a5fd_c018_47e5_a163_b895953aadb2.slice/crio-b417b1de86d2a7b5abd05a87dd427c5c32408a706893def94dd3d546d5c87662 WatchSource:0}: Error finding container b417b1de86d2a7b5abd05a87dd427c5c32408a706893def94dd3d546d5c87662: Status 404 returned error can't find the container with id b417b1de86d2a7b5abd05a87dd427c5c32408a706893def94dd3d546d5c87662 Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.977634 5058 generic.go:334] "Generic (PLEG): container finished" podID="b015a5fd-c018-47e5-a163-b895953aadb2" containerID="b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499" exitCode=0 Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.977691 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerDied","Data":"b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499"} Oct 14 07:15:32 crc kubenswrapper[5058]: I1014 07:15:32.977981 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerStarted","Data":"b417b1de86d2a7b5abd05a87dd427c5c32408a706893def94dd3d546d5c87662"} Oct 14 07:15:33 crc kubenswrapper[5058]: I1014 07:15:33.988174 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerStarted","Data":"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709"} Oct 14 07:15:35 crc kubenswrapper[5058]: I1014 07:15:35.004319 5058 generic.go:334] "Generic (PLEG): container finished" 
podID="b015a5fd-c018-47e5-a163-b895953aadb2" containerID="5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709" exitCode=0 Oct 14 07:15:35 crc kubenswrapper[5058]: I1014 07:15:35.004382 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerDied","Data":"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709"} Oct 14 07:15:36 crc kubenswrapper[5058]: I1014 07:15:36.021880 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerStarted","Data":"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015"} Oct 14 07:15:36 crc kubenswrapper[5058]: I1014 07:15:36.049373 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q9h77" podStartSLOduration=2.503731126 podStartE2EDuration="5.0493357s" podCreationTimestamp="2025-10-14 07:15:31 +0000 UTC" firstStartedPulling="2025-10-14 07:15:32.978880184 +0000 UTC m=+1680.889963990" lastFinishedPulling="2025-10-14 07:15:35.524484758 +0000 UTC m=+1683.435568564" observedRunningTime="2025-10-14 07:15:36.044964426 +0000 UTC m=+1683.956048332" watchObservedRunningTime="2025-10-14 07:15:36.0493357 +0000 UTC m=+1683.960419566" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.138265 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.138324 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.198533 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.334095 5058 scope.go:117] "RemoveContainer" containerID="10ec47490e6421a71c56dc9c4018cd51961f35995bc57eb573f4aaacaf119eea" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.354477 5058 scope.go:117] "RemoveContainer" containerID="eac297a1ab7de0fc6708883d0bb399fc2467a293069482118a7213d02ee1169d" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.383826 5058 scope.go:117] "RemoveContainer" containerID="dd8d9264e90400aef2e4a4308ff7a28d4739a18cf3ccd1fa4028410913e9f4cb" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.426415 5058 scope.go:117] "RemoveContainer" containerID="89aab696d2006e8237245cc838e2d37a49d39b388091b004bb1bf280d2d579a7" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.465029 5058 scope.go:117] "RemoveContainer" containerID="f166a159ddc4b5e5bff41474766e07018bec2ed39bfa6abf6e45d3859d643ef4" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.484232 5058 scope.go:117] "RemoveContainer" containerID="a7097745ea8c09cfe21aa321a8f31a6de450e38a41d2d22287ccb498aca9574d" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.526730 5058 scope.go:117] "RemoveContainer" containerID="4d2ca79f27d42a301586f6dcab92f4bec5bd1dd7f66d712c73c46b4eceb65bf8" Oct 14 07:15:37 crc kubenswrapper[5058]: I1014 07:15:37.550747 5058 scope.go:117] "RemoveContainer" containerID="70e91a24551b0735af960c2cc36eeff8a125d876bf6da56d8d6d8fff049d0832" Oct 14 07:15:38 crc kubenswrapper[5058]: I1014 07:15:38.120337 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:39 crc kubenswrapper[5058]: I1014 07:15:39.355362 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.058055 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48j5l" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="registry-server" containerID="cri-o://70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e" gracePeriod=2 Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.538743 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.654504 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2528\" (UniqueName: \"kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528\") pod \"48cc3293-e516-4a91-b5d0-703de93b20d1\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.654685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities\") pod \"48cc3293-e516-4a91-b5d0-703de93b20d1\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.654741 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content\") pod \"48cc3293-e516-4a91-b5d0-703de93b20d1\" (UID: \"48cc3293-e516-4a91-b5d0-703de93b20d1\") " Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.656634 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities" (OuterVolumeSpecName: "utilities") pod "48cc3293-e516-4a91-b5d0-703de93b20d1" (UID: "48cc3293-e516-4a91-b5d0-703de93b20d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.664193 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528" (OuterVolumeSpecName: "kube-api-access-b2528") pod "48cc3293-e516-4a91-b5d0-703de93b20d1" (UID: "48cc3293-e516-4a91-b5d0-703de93b20d1"). InnerVolumeSpecName "kube-api-access-b2528". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.685209 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48cc3293-e516-4a91-b5d0-703de93b20d1" (UID: "48cc3293-e516-4a91-b5d0-703de93b20d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.756303 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2528\" (UniqueName: \"kubernetes.io/projected/48cc3293-e516-4a91-b5d0-703de93b20d1-kube-api-access-b2528\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.756332 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.756340 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48cc3293-e516-4a91-b5d0-703de93b20d1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:40 crc kubenswrapper[5058]: I1014 07:15:40.791200 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:15:40 crc kubenswrapper[5058]: E1014 07:15:40.791484 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.073628 5058 generic.go:334] "Generic (PLEG): container finished" podID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerID="70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e" exitCode=0 Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.073687 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerDied","Data":"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e"} Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.073725 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48j5l" event={"ID":"48cc3293-e516-4a91-b5d0-703de93b20d1","Type":"ContainerDied","Data":"303c261be934c98c237e765cac31ed08d497240acb1f07f27c764fa84615dc03"} Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.073752 5058 scope.go:117] "RemoveContainer" containerID="70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.073756 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48j5l" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.102589 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.110178 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48j5l"] Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.113866 5058 scope.go:117] "RemoveContainer" containerID="acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.132299 5058 scope.go:117] "RemoveContainer" containerID="dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.178591 5058 scope.go:117] "RemoveContainer" containerID="70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e" Oct 14 07:15:41 crc kubenswrapper[5058]: E1014 07:15:41.179465 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e\": container with ID starting with 70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e not found: ID does not exist" containerID="70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.179505 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e"} err="failed to get container status \"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e\": rpc error: code = NotFound desc = could not find container \"70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e\": container with ID starting with 70bc20453139c5e147914d57689b4116aca9db4f74684fb9c373a82e0cd4238e not found: ID does not exist" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.179528 5058 scope.go:117] "RemoveContainer" containerID="acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af" Oct 14 07:15:41 crc kubenswrapper[5058]: E1014 07:15:41.179964 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af\": container with ID starting with acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af not found: ID does not exist" containerID="acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.180001 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af"} err="failed to get container status \"acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af\": rpc error: code = NotFound desc = could not find container \"acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af\": container with ID starting with acff0ab1adcf2563d10527d69750e07831e4efef88e6b0b3f91ea5d732ded0af not found: ID does not exist" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.180025 5058 scope.go:117] "RemoveContainer" containerID="dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901" Oct 14 07:15:41 crc kubenswrapper[5058]: E1014 07:15:41.180471 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901\": container with ID starting with dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901 not found: ID does not exist" containerID="dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901" Oct 14 07:15:41 crc kubenswrapper[5058]: I1014 07:15:41.180502 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901"} err="failed to get container status \"dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901\": rpc error: code = NotFound desc = could not find container \"dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901\": container with ID starting with dca353f9f1c746a255c91c133930329898c344c60189c6f54f4c7562d3866901 not found: ID does not exist" Oct 14 07:15:42 crc kubenswrapper[5058]: I1014 07:15:42.184598 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:42 crc kubenswrapper[5058]: I1014 07:15:42.185185 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:42 crc kubenswrapper[5058]: I1014 07:15:42.245263 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:42 crc kubenswrapper[5058]: I1014 07:15:42.808233 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" path="/var/lib/kubelet/pods/48cc3293-e516-4a91-b5d0-703de93b20d1/volumes" Oct 14 07:15:43 crc kubenswrapper[5058]: I1014 07:15:43.175396 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:43 crc kubenswrapper[5058]: I1014 07:15:43.749363 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.110548 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q9h77" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="registry-server" containerID="cri-o://ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015" gracePeriod=2 Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.584066 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.633876 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlbzj\" (UniqueName: \"kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj\") pod \"b015a5fd-c018-47e5-a163-b895953aadb2\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.633930 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities\") pod \"b015a5fd-c018-47e5-a163-b895953aadb2\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.633957 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content\") pod \"b015a5fd-c018-47e5-a163-b895953aadb2\" (UID: \"b015a5fd-c018-47e5-a163-b895953aadb2\") " Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.635015 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities" (OuterVolumeSpecName: "utilities") pod "b015a5fd-c018-47e5-a163-b895953aadb2" (UID: "b015a5fd-c018-47e5-a163-b895953aadb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.635518 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.643343 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj" (OuterVolumeSpecName: "kube-api-access-jlbzj") pod "b015a5fd-c018-47e5-a163-b895953aadb2" (UID: "b015a5fd-c018-47e5-a163-b895953aadb2"). InnerVolumeSpecName "kube-api-access-jlbzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.681274 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b015a5fd-c018-47e5-a163-b895953aadb2" (UID: "b015a5fd-c018-47e5-a163-b895953aadb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.737538 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlbzj\" (UniqueName: \"kubernetes.io/projected/b015a5fd-c018-47e5-a163-b895953aadb2-kube-api-access-jlbzj\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:45 crc kubenswrapper[5058]: I1014 07:15:45.737591 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b015a5fd-c018-47e5-a163-b895953aadb2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.123260 5058 generic.go:334] "Generic (PLEG): container finished" podID="b015a5fd-c018-47e5-a163-b895953aadb2" containerID="ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015" exitCode=0 Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.123324 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9h77" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.123323 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerDied","Data":"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015"} Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.123533 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9h77" event={"ID":"b015a5fd-c018-47e5-a163-b895953aadb2","Type":"ContainerDied","Data":"b417b1de86d2a7b5abd05a87dd427c5c32408a706893def94dd3d546d5c87662"} Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.123565 5058 scope.go:117] "RemoveContainer" containerID="ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.166903 5058 scope.go:117] "RemoveContainer" containerID="5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.171839 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.181472 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q9h77"] Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.192924 5058 scope.go:117] "RemoveContainer" containerID="b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.240927 5058 scope.go:117] "RemoveContainer" containerID="ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015" Oct 14 07:15:46 crc kubenswrapper[5058]: E1014 07:15:46.241578 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015\": container with ID starting with ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015 not found: ID does not exist" containerID="ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.241607 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015"} err="failed to get container status 
\"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015\": rpc error: code = NotFound desc = could not find container \"ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015\": container with ID starting with ad6d74379c185daa84839feba3c09a3d26d2e559e2c63d43cca27ee5b4efb015 not found: ID does not exist" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.241630 5058 scope.go:117] "RemoveContainer" containerID="5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709" Oct 14 07:15:46 crc kubenswrapper[5058]: E1014 07:15:46.242148 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709\": container with ID starting with 5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709 not found: ID does not exist" containerID="5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.242169 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709"} err="failed to get container status \"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709\": rpc error: code = NotFound desc = could not find container \"5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709\": container with ID starting with 5cf033ebbfd06c53da80cdbf459c0c078326c9226e2a525b44cc0f03e09bb709 not found: ID does not exist" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.242181 5058 scope.go:117] "RemoveContainer" containerID="b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499" Oct 14 07:15:46 crc kubenswrapper[5058]: E1014 07:15:46.242705 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499\": container with ID starting with b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499 not found: ID does not exist" containerID="b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.242727 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499"} err="failed to get container status \"b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499\": rpc error: code = NotFound desc = could not find container \"b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499\": container with ID starting with b217d813a25136c8e53c8ed926b6fd378dcba2dcb4e49114e373d6b8a85f7499 not found: ID does not exist" Oct 14 07:15:46 crc kubenswrapper[5058]: I1014 07:15:46.808206 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" path="/var/lib/kubelet/pods/b015a5fd-c018-47e5-a163-b895953aadb2/volumes" Oct 14 07:15:55 crc kubenswrapper[5058]: I1014 07:15:55.790626 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:15:55 crc kubenswrapper[5058]: E1014 07:15:55.791683 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:16:08 crc kubenswrapper[5058]: I1014 07:16:08.791177 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:16:08 crc kubenswrapper[5058]: E1014 07:16:08.792404 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:16:22 crc kubenswrapper[5058]: I1014 07:16:22.797938 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:16:22 crc kubenswrapper[5058]: E1014 07:16:22.798852 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:16:33 crc kubenswrapper[5058]: I1014 07:16:33.789996 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:16:33 crc kubenswrapper[5058]: E1014 07:16:33.791030 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:16:37 crc kubenswrapper[5058]: I1014 07:16:37.664608 5058 scope.go:117] "RemoveContainer" containerID="2182c2e8effad0274964ae00b87d07ae2b2c94468e266f8b6506ca45b077abb0" Oct 14 07:16:37 crc kubenswrapper[5058]: I1014 07:16:37.699973 5058 scope.go:117] "RemoveContainer" containerID="b5ced10e8e9dc43761eac804b7a8665e7ec213ce5e5cabd7f244dd3594e1902c" Oct 14 07:16:37 crc kubenswrapper[5058]: I1014 07:16:37.736836 5058 scope.go:117] "RemoveContainer" containerID="f4ac6e8b23e28b50b8cab485e5f74d65c0c304c304bce434d559be066174b01f" Oct 14 07:16:37 crc kubenswrapper[5058]: I1014 07:16:37.810600 5058 scope.go:117] "RemoveContainer" containerID="e4d7518b85947d26c6171f1eab2950b6d3234d20c8173fbf3e81eb1f4e60fb15" Oct 14 07:16:37 crc kubenswrapper[5058]: I1014 07:16:37.839670 5058 scope.go:117] "RemoveContainer" containerID="2f46d25b77e87986761969ede4c3b2a2be4d314e5f92ff4d72e882b69e2161e3" Oct 14 07:16:47 crc kubenswrapper[5058]: I1014 07:16:47.790375 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:16:47 crc kubenswrapper[5058]: E1014 07:16:47.791406 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:16:58 crc kubenswrapper[5058]: I1014 07:16:58.790242 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:16:58 crc kubenswrapper[5058]: E1014 07:16:58.791099 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:17:11 crc kubenswrapper[5058]: I1014 07:17:11.790111 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:17:11 crc kubenswrapper[5058]: E1014 07:17:11.791123 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:17:25 crc kubenswrapper[5058]: I1014 07:17:25.789626 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:17:25 crc kubenswrapper[5058]: E1014 07:17:25.790530 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:17:37 crc kubenswrapper[5058]: I1014 07:17:37.790310 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:17:37 crc kubenswrapper[5058]: E1014 07:17:37.791336 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:17:50 crc kubenswrapper[5058]: I1014 07:17:50.790601 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:17:50 crc kubenswrapper[5058]: E1014 07:17:50.791493 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
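[Annotation] The repeating "back-off 5m0s restarting failed container" errors for machine-config-daemon show a pod at the top of kubelet's crash-loop backoff: the restart delay doubles after each failed start and is capped at five minutes, so once a container has failed enough times every sync attempt is skipped with this message until the backoff window expires. A sketch of the delay schedule (the 10s initial delay is kubelet's default and is assumed here):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const max = 5 * time.Minute // the "back-off 5m0s" cap seen above
        d := 10 * time.Second       // kubelet's default initial delay (assumed)
        for i := 1; i <= 7; i++ {
            fmt.Printf("restart %d: wait %v\n", i, d)
            d *= 2
            if d > max {
                d = max
            }
        }
        // restarts 1..7 wait: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
    }

This is consistent with the log: the RemoveContainer attempts at 07:15:55, 07:16:08, 07:16:22, ... are each rejected until the successful ContainerStarted at 07:18:45.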
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:18:02 crc kubenswrapper[5058]: I1014 07:18:02.793325 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:18:02 crc kubenswrapper[5058]: E1014 07:18:02.794234 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:18:17 crc kubenswrapper[5058]: I1014 07:18:17.789891 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:18:17 crc kubenswrapper[5058]: E1014 07:18:17.790733 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:18:29 crc kubenswrapper[5058]: I1014 07:18:29.789549 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:18:29 crc kubenswrapper[5058]: E1014 07:18:29.790632 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:18:44 crc kubenswrapper[5058]: I1014 07:18:44.790940 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:18:45 crc kubenswrapper[5058]: I1014 07:18:45.695675 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf"} Oct 14 07:21:03 crc kubenswrapper[5058]: I1014 07:21:03.656244 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:21:03 crc kubenswrapper[5058]: I1014 07:21:03.656878 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:21:33 crc kubenswrapper[5058]: I1014 07:21:33.656242 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:21:33 crc kubenswrapper[5058]: I1014 07:21:33.656926 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:22:03 crc kubenswrapper[5058]: I1014 07:22:03.656440 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:22:03 crc kubenswrapper[5058]: I1014 07:22:03.657036 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:22:03 crc kubenswrapper[5058]: I1014 07:22:03.657096 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:22:03 crc kubenswrapper[5058]: I1014 07:22:03.658031 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:22:03 crc kubenswrapper[5058]: I1014 07:22:03.658134 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf" gracePeriod=600 Oct 14 07:22:04 crc kubenswrapper[5058]: I1014 07:22:04.385171 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf" exitCode=0 Oct 14 07:22:04 crc kubenswrapper[5058]: I1014 07:22:04.385226 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf"} Oct 14 07:22:04 crc kubenswrapper[5058]: I1014 07:22:04.385511 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799"} Oct 14 07:22:04 crc kubenswrapper[5058]: I1014 07:22:04.385532 5058 scope.go:117] "RemoveContainer" containerID="a185a0e91cb1c63e2ecf2487995003330146ad34084af72413df1d2e9bffabe5" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 
07:22:18.162036 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.162910 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.162927 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.162947 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="extract-utilities" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.162955 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="extract-utilities" Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.162967 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="extract-utilities" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.162975 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="extract-utilities" Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.162989 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="extract-content" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.162996 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="extract-content" Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.163011 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.163018 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: E1014 07:22:18.163042 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="extract-content" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.163051 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="extract-content" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.163211 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b015a5fd-c018-47e5-a163-b895953aadb2" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.163232 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cc3293-e516-4a91-b5d0-703de93b20d1" containerName="registry-server" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.164412 5058 util.go:30] "No sandbox for pod can be found. 
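[Annotation] The burst of cpu_manager "RemoveStaleState: removing container" / state_mem "Deleted CPUSet assignment" / memory_manager "RemoveStaleState removing state" entries above is housekeeping triggered by admitting the new redhat-operators-b8x2m pod: the resource managers still hold per-container assignments for the two marketplace pods deleted earlier (podUIDs b015a5fd... and 48cc3293...) and purge them before placing the new pod. A rough sketch of that sweep, with a plain map standing in for the managers' checkpointed state:

    package main

    import "fmt"

    func main() {
        // podUID -> container -> assignment; values are illustrative only.
        assignments := map[string]map[string]string{
            "b015a5fd-c018-47e5-a163-b895953aadb2": {"registry-server": "shared pool"},
            "48cc3293-e516-4a91-b5d0-703de93b20d1": {"extract-content": "shared pool"},
        }
        // Pods the API server still knows about.
        active := map[string]bool{"4097b0b2-6727-4ae6-8448-00d0be590446": true}

        for podUID, containers := range assignments {
            if active[podUID] {
                continue // still running; keep its state
            }
            for name := range containers {
                fmt.Printf("Deleted CPUSet assignment podUID=%s container=%s\n", podUID, name)
                delete(containers, name)
            }
            delete(assignments, podUID)
        }
    }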
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.173526 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.366995 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xj8\" (UniqueName: \"kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.367068 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.367140 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.468300 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xj8\" (UniqueName: \"kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.468353 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.468401 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.468913 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.469033 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.491466 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q9xj8\" (UniqueName: \"kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8\") pod \"redhat-operators-b8x2m\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:18 crc kubenswrapper[5058]: I1014 07:22:18.783233 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:19 crc kubenswrapper[5058]: I1014 07:22:19.208294 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:19 crc kubenswrapper[5058]: I1014 07:22:19.526323 5058 generic.go:334] "Generic (PLEG): container finished" podID="4097b0b2-6727-4ae6-8448-00d0be590446" containerID="7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea" exitCode=0 Oct 14 07:22:19 crc kubenswrapper[5058]: I1014 07:22:19.526473 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerDied","Data":"7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea"} Oct 14 07:22:19 crc kubenswrapper[5058]: I1014 07:22:19.526613 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerStarted","Data":"979dbb1a7f2ac3dcaebc9e3baf3e909bea5f753afdf71d25264762b985cacad0"} Oct 14 07:22:19 crc kubenswrapper[5058]: I1014 07:22:19.527995 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.360933 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.368864 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.378721 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.501010 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.501102 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.501138 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hrb\" (UniqueName: \"kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.534437 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerStarted","Data":"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1"} Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.602526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.602599 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44hrb\" (UniqueName: \"kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.602645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.603173 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.603438 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.622656 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44hrb\" (UniqueName: \"kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb\") pod \"certified-operators-4kxfx\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:20 crc kubenswrapper[5058]: I1014 07:22:20.723276 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.027356 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:21 crc kubenswrapper[5058]: W1014 07:22:21.061848 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe8e4b7_0db0_45a8_b671_a6ab1472aea3.slice/crio-e7ed15f8fef01094d5ef1a588aa8d7b3423f487c00e78391604f93a7f4d5a13c WatchSource:0}: Error finding container e7ed15f8fef01094d5ef1a588aa8d7b3423f487c00e78391604f93a7f4d5a13c: Status 404 returned error can't find the container with id e7ed15f8fef01094d5ef1a588aa8d7b3423f487c00e78391604f93a7f4d5a13c Oct 14 07:22:21 crc kubenswrapper[5058]: E1014 07:22:21.326035 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe8e4b7_0db0_45a8_b671_a6ab1472aea3.slice/crio-conmon-2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b.scope\": RecentStats: unable to find data in memory cache]" Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.543417 5058 generic.go:334] "Generic (PLEG): container finished" podID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerID="2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b" exitCode=0 Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.543559 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerDied","Data":"2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b"} Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.545106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerStarted","Data":"e7ed15f8fef01094d5ef1a588aa8d7b3423f487c00e78391604f93a7f4d5a13c"} Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.548378 5058 generic.go:334] "Generic (PLEG): container finished" podID="4097b0b2-6727-4ae6-8448-00d0be590446" containerID="06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1" exitCode=0 Oct 14 07:22:21 crc kubenswrapper[5058]: I1014 07:22:21.548420 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerDied","Data":"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1"} Oct 14 07:22:22 crc kubenswrapper[5058]: I1014 07:22:22.557415 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerStarted","Data":"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94"} Oct 14 07:22:22 crc kubenswrapper[5058]: I1014 07:22:22.559859 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerStarted","Data":"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b"} Oct 14 07:22:22 crc kubenswrapper[5058]: I1014 07:22:22.620037 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8x2m" podStartSLOduration=2.1964437390000002 podStartE2EDuration="4.620014999s" podCreationTimestamp="2025-10-14 07:22:18 +0000 UTC" firstStartedPulling="2025-10-14 07:22:19.527814485 +0000 UTC m=+2087.438898291" lastFinishedPulling="2025-10-14 07:22:21.951385755 +0000 UTC m=+2089.862469551" observedRunningTime="2025-10-14 07:22:22.614861324 +0000 UTC m=+2090.525945160" watchObservedRunningTime="2025-10-14 07:22:22.620014999 +0000 UTC m=+2090.531098815" Oct 14 07:22:23 crc kubenswrapper[5058]: I1014 07:22:23.568439 5058 generic.go:334] "Generic (PLEG): container finished" podID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerID="0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94" exitCode=0 Oct 14 07:22:23 crc kubenswrapper[5058]: I1014 07:22:23.568579 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerDied","Data":"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94"} Oct 14 07:22:24 crc kubenswrapper[5058]: I1014 07:22:24.577712 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerStarted","Data":"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6"} Oct 14 07:22:24 crc kubenswrapper[5058]: I1014 07:22:24.600379 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kxfx" podStartSLOduration=2.052023386 podStartE2EDuration="4.600352996s" podCreationTimestamp="2025-10-14 07:22:20 +0000 UTC" firstStartedPulling="2025-10-14 07:22:21.546994705 +0000 UTC m=+2089.458078521" lastFinishedPulling="2025-10-14 07:22:24.095324325 +0000 UTC m=+2092.006408131" observedRunningTime="2025-10-14 07:22:24.592257148 +0000 UTC m=+2092.503340964" watchObservedRunningTime="2025-10-14 07:22:24.600352996 +0000 UTC m=+2092.511436812" Oct 14 07:22:28 crc kubenswrapper[5058]: I1014 07:22:28.783421 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:28 crc kubenswrapper[5058]: I1014 07:22:28.783927 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:28 crc kubenswrapper[5058]: I1014 07:22:28.827978 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:29 crc kubenswrapper[5058]: I1014 07:22:29.663502 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:29 crc kubenswrapper[5058]: I1014 07:22:29.722418 5058 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:30 crc kubenswrapper[5058]: I1014 07:22:30.724190 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:30 crc kubenswrapper[5058]: I1014 07:22:30.725577 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:30 crc kubenswrapper[5058]: I1014 07:22:30.761494 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:31 crc kubenswrapper[5058]: I1014 07:22:31.627744 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8x2m" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="registry-server" containerID="cri-o://55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b" gracePeriod=2 Oct 14 07:22:31 crc kubenswrapper[5058]: I1014 07:22:31.712030 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.461772 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.507891 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.638242 5058 generic.go:334] "Generic (PLEG): container finished" podID="4097b0b2-6727-4ae6-8448-00d0be590446" containerID="55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b" exitCode=0 Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.638280 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8x2m" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.638319 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerDied","Data":"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b"} Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.638415 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8x2m" event={"ID":"4097b0b2-6727-4ae6-8448-00d0be590446","Type":"ContainerDied","Data":"979dbb1a7f2ac3dcaebc9e3baf3e909bea5f753afdf71d25264762b985cacad0"} Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.638441 5058 scope.go:117] "RemoveContainer" containerID="55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.660315 5058 scope.go:117] "RemoveContainer" containerID="06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.681669 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content\") pod \"4097b0b2-6727-4ae6-8448-00d0be590446\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.681988 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9xj8\" (UniqueName: \"kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8\") pod \"4097b0b2-6727-4ae6-8448-00d0be590446\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.682148 5058 scope.go:117] "RemoveContainer" containerID="7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.682169 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities\") pod \"4097b0b2-6727-4ae6-8448-00d0be590446\" (UID: \"4097b0b2-6727-4ae6-8448-00d0be590446\") " Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.683232 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities" (OuterVolumeSpecName: "utilities") pod "4097b0b2-6727-4ae6-8448-00d0be590446" (UID: "4097b0b2-6727-4ae6-8448-00d0be590446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.685157 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.690021 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8" (OuterVolumeSpecName: "kube-api-access-q9xj8") pod "4097b0b2-6727-4ae6-8448-00d0be590446" (UID: "4097b0b2-6727-4ae6-8448-00d0be590446"). InnerVolumeSpecName "kube-api-access-q9xj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.723732 5058 scope.go:117] "RemoveContainer" containerID="55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b" Oct 14 07:22:32 crc kubenswrapper[5058]: E1014 07:22:32.724213 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b\": container with ID starting with 55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b not found: ID does not exist" containerID="55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.724280 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b"} err="failed to get container status \"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b\": rpc error: code = NotFound desc = could not find container \"55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b\": container with ID starting with 55e037c2e479440314a47ed5f3744fd70fde5b471b9098eb02970a7d3804469b not found: ID does not exist" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.724313 5058 scope.go:117] "RemoveContainer" containerID="06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1" Oct 14 07:22:32 crc kubenswrapper[5058]: E1014 07:22:32.724644 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1\": container with ID starting with 06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1 not found: ID does not exist" containerID="06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.724674 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1"} err="failed to get container status \"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1\": rpc error: code = NotFound desc = could not find container \"06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1\": container with ID starting with 06c0f341e1bac3ffa442dd2274436826fe54120d94a5a42e83429f2631daa3e1 not found: ID does not exist" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.724903 5058 scope.go:117] "RemoveContainer" containerID="7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea" Oct 14 07:22:32 crc kubenswrapper[5058]: E1014 07:22:32.725171 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea\": container with ID starting with 7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea not found: ID does not exist" containerID="7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.725191 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea"} err="failed to get container status \"7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea\": rpc error: code = NotFound desc = could not 
find container \"7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea\": container with ID starting with 7bf01bae735b862685696bf5548a0a0134bf48dba117738239f00ca1774cfdea not found: ID does not exist" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.778410 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4097b0b2-6727-4ae6-8448-00d0be590446" (UID: "4097b0b2-6727-4ae6-8448-00d0be590446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.785989 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4097b0b2-6727-4ae6-8448-00d0be590446-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.786027 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9xj8\" (UniqueName: \"kubernetes.io/projected/4097b0b2-6727-4ae6-8448-00d0be590446-kube-api-access-q9xj8\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.958398 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:32 crc kubenswrapper[5058]: I1014 07:22:32.963310 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8x2m"] Oct 14 07:22:33 crc kubenswrapper[5058]: I1014 07:22:33.648986 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kxfx" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="registry-server" containerID="cri-o://73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6" gracePeriod=2 Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.048664 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.207736 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44hrb\" (UniqueName: \"kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb\") pod \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.208185 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities\") pod \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.208220 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content\") pod \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\" (UID: \"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3\") " Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.209425 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities" (OuterVolumeSpecName: "utilities") pod "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" (UID: "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.213928 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb" (OuterVolumeSpecName: "kube-api-access-44hrb") pod "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" (UID: "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3"). InnerVolumeSpecName "kube-api-access-44hrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.258600 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" (UID: "cbe8e4b7-0db0-45a8-b671-a6ab1472aea3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.309229 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.309273 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.309288 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44hrb\" (UniqueName: \"kubernetes.io/projected/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3-kube-api-access-44hrb\") on node \"crc\" DevicePath \"\"" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.657918 5058 generic.go:334] "Generic (PLEG): container finished" podID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerID="73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6" exitCode=0 Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.657967 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerDied","Data":"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6"} Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.658028 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kxfx" event={"ID":"cbe8e4b7-0db0-45a8-b671-a6ab1472aea3","Type":"ContainerDied","Data":"e7ed15f8fef01094d5ef1a588aa8d7b3423f487c00e78391604f93a7f4d5a13c"} Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.657986 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kxfx" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.658091 5058 scope.go:117] "RemoveContainer" containerID="73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.678096 5058 scope.go:117] "RemoveContainer" containerID="0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.692319 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.697682 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kxfx"] Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.714305 5058 scope.go:117] "RemoveContainer" containerID="2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.731083 5058 scope.go:117] "RemoveContainer" containerID="73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6" Oct 14 07:22:34 crc kubenswrapper[5058]: E1014 07:22:34.733183 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6\": container with ID starting with 73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6 not found: ID does not exist" containerID="73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.733248 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6"} err="failed to get container status \"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6\": rpc error: code = NotFound desc = could not find container \"73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6\": container with ID starting with 73ae2cfee8547028229a77d5882e57a120b8d90bdde071a526f33eaa853754b6 not found: ID does not exist" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.733276 5058 scope.go:117] "RemoveContainer" containerID="0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94" Oct 14 07:22:34 crc kubenswrapper[5058]: E1014 07:22:34.735332 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94\": container with ID starting with 0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94 not found: ID does not exist" containerID="0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.735366 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94"} err="failed to get container status \"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94\": rpc error: code = NotFound desc = could not find container \"0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94\": container with ID starting with 0a997c9084d7f1fb61d70e6112a5a71420b0a113b1ccdf8d5ac1465087990f94 not found: ID does not exist" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.735389 5058 scope.go:117] "RemoveContainer" 
containerID="2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b" Oct 14 07:22:34 crc kubenswrapper[5058]: E1014 07:22:34.735699 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b\": container with ID starting with 2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b not found: ID does not exist" containerID="2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.735721 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b"} err="failed to get container status \"2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b\": rpc error: code = NotFound desc = could not find container \"2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b\": container with ID starting with 2604a5932d1464f9df20db4af71cb5dc7c22df4d33946e9ffcae8ef2d9b13a2b not found: ID does not exist" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.801256 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" path="/var/lib/kubelet/pods/4097b0b2-6727-4ae6-8448-00d0be590446/volumes" Oct 14 07:22:34 crc kubenswrapper[5058]: I1014 07:22:34.801916 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" path="/var/lib/kubelet/pods/cbe8e4b7-0db0-45a8-b671-a6ab1472aea3/volumes" Oct 14 07:24:03 crc kubenswrapper[5058]: I1014 07:24:03.656444 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:24:03 crc kubenswrapper[5058]: I1014 07:24:03.657214 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:24:33 crc kubenswrapper[5058]: I1014 07:24:33.656232 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:24:33 crc kubenswrapper[5058]: I1014 07:24:33.656828 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.656368 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.657075 5058 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.657136 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.657938 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.658004 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" gracePeriod=600 Oct 14 07:25:03 crc kubenswrapper[5058]: E1014 07:25:03.789258 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.834090 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" exitCode=0 Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.834156 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799"} Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.834205 5058 scope.go:117] "RemoveContainer" containerID="6b316e1064745ca8eb5777803679e01b7bfddd71c1f1c003240826376e061aaf" Oct 14 07:25:03 crc kubenswrapper[5058]: I1014 07:25:03.834867 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:25:03 crc kubenswrapper[5058]: E1014 07:25:03.835134 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:25:19 crc kubenswrapper[5058]: I1014 07:25:19.790484 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:25:19 crc kubenswrapper[5058]: E1014 07:25:19.791840 5058 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:25:34 crc kubenswrapper[5058]: I1014 07:25:34.790729 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:25:34 crc kubenswrapper[5058]: E1014 07:25:34.791672 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.644833 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645696 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645718 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645757 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="extract-utilities" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645768 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="extract-utilities" Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645791 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="extract-content" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645825 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="extract-content" Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645850 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="extract-utilities" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645861 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="extract-utilities" Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645877 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="extract-content" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645887 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="extract-content" Oct 14 07:25:35 crc kubenswrapper[5058]: E1014 07:25:35.645911 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.645922 5058 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.646179 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe8e4b7-0db0-45a8-b671-a6ab1472aea3" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.646210 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4097b0b2-6727-4ae6-8448-00d0be590446" containerName="registry-server" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.653833 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.667750 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.856298 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.856354 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.856387 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9v6c\" (UniqueName: \"kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.958122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.958215 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9v6c\" (UniqueName: \"kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.958349 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.958994 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.959194 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:35 crc kubenswrapper[5058]: I1014 07:25:35.977561 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9v6c\" (UniqueName: \"kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c\") pod \"community-operators-pn8vx\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:36 crc kubenswrapper[5058]: I1014 07:25:36.275096 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:36 crc kubenswrapper[5058]: I1014 07:25:36.727213 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:37 crc kubenswrapper[5058]: I1014 07:25:37.154877 5058 generic.go:334] "Generic (PLEG): container finished" podID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerID="3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449" exitCode=0 Oct 14 07:25:37 crc kubenswrapper[5058]: I1014 07:25:37.154947 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerDied","Data":"3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449"} Oct 14 07:25:37 crc kubenswrapper[5058]: I1014 07:25:37.155291 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerStarted","Data":"f5f0de65503b3f8cce720e086adc619cd228b4f4a48624e6264eaa6c74232ced"} Oct 14 07:25:38 crc kubenswrapper[5058]: I1014 07:25:38.164604 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerStarted","Data":"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031"} Oct 14 07:25:39 crc kubenswrapper[5058]: I1014 07:25:39.174895 5058 generic.go:334] "Generic (PLEG): container finished" podID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerID="117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031" exitCode=0 Oct 14 07:25:39 crc kubenswrapper[5058]: I1014 07:25:39.174943 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerDied","Data":"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031"} Oct 14 07:25:40 crc kubenswrapper[5058]: I1014 07:25:40.188355 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerStarted","Data":"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f"} Oct 14 07:25:40 crc kubenswrapper[5058]: I1014 07:25:40.210014 
5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn8vx" podStartSLOduration=2.7475295539999998 podStartE2EDuration="5.209991927s" podCreationTimestamp="2025-10-14 07:25:35 +0000 UTC" firstStartedPulling="2025-10-14 07:25:37.156989281 +0000 UTC m=+2285.068073137" lastFinishedPulling="2025-10-14 07:25:39.619451704 +0000 UTC m=+2287.530535510" observedRunningTime="2025-10-14 07:25:40.20869516 +0000 UTC m=+2288.119778966" watchObservedRunningTime="2025-10-14 07:25:40.209991927 +0000 UTC m=+2288.121075733" Oct 14 07:25:46 crc kubenswrapper[5058]: I1014 07:25:46.276310 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:46 crc kubenswrapper[5058]: I1014 07:25:46.277046 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:46 crc kubenswrapper[5058]: I1014 07:25:46.348598 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:47 crc kubenswrapper[5058]: I1014 07:25:47.335672 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:47 crc kubenswrapper[5058]: I1014 07:25:47.408947 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:48 crc kubenswrapper[5058]: I1014 07:25:48.791240 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:25:48 crc kubenswrapper[5058]: E1014 07:25:48.792075 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.282048 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pn8vx" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="registry-server" containerID="cri-o://b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f" gracePeriod=2 Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.709681 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.869724 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content\") pod \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.869895 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities\") pod \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.869999 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9v6c\" (UniqueName: \"kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c\") pod \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\" (UID: \"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a\") " Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.871871 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities" (OuterVolumeSpecName: "utilities") pod "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" (UID: "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.880492 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c" (OuterVolumeSpecName: "kube-api-access-w9v6c") pod "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" (UID: "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a"). InnerVolumeSpecName "kube-api-access-w9v6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.958608 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" (UID: "a06cc77b-a1a0-4d58-9c25-3ff46e409c9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.973772 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.973832 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9v6c\" (UniqueName: \"kubernetes.io/projected/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-kube-api-access-w9v6c\") on node \"crc\" DevicePath \"\"" Oct 14 07:25:49 crc kubenswrapper[5058]: I1014 07:25:49.973843 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.294552 5058 generic.go:334] "Generic (PLEG): container finished" podID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerID="b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f" exitCode=0 Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.294607 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerDied","Data":"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f"} Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.294660 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn8vx" event={"ID":"a06cc77b-a1a0-4d58-9c25-3ff46e409c9a","Type":"ContainerDied","Data":"f5f0de65503b3f8cce720e086adc619cd228b4f4a48624e6264eaa6c74232ced"} Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.294691 5058 scope.go:117] "RemoveContainer" containerID="b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.294865 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn8vx" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.337498 5058 scope.go:117] "RemoveContainer" containerID="117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.349184 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.356086 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pn8vx"] Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.367587 5058 scope.go:117] "RemoveContainer" containerID="3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.415271 5058 scope.go:117] "RemoveContainer" containerID="b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f" Oct 14 07:25:50 crc kubenswrapper[5058]: E1014 07:25:50.416127 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f\": container with ID starting with b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f not found: ID does not exist" containerID="b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.416190 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f"} err="failed to get container status \"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f\": rpc error: code = NotFound desc = could not find container \"b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f\": container with ID starting with b54a45049b2dab6531f8224e358b8261ccace7a1f3696133f1140e168b047c0f not found: ID does not exist" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.416232 5058 scope.go:117] "RemoveContainer" containerID="117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031" Oct 14 07:25:50 crc kubenswrapper[5058]: E1014 07:25:50.416674 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031\": container with ID starting with 117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031 not found: ID does not exist" containerID="117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.416728 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031"} err="failed to get container status \"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031\": rpc error: code = NotFound desc = could not find container \"117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031\": container with ID starting with 117d03924419a02b25c8dc9b4bae3d4257626b614cf4f2365e1c624f6b677031 not found: ID does not exist" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.416762 5058 scope.go:117] "RemoveContainer" containerID="3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449" Oct 14 07:25:50 crc kubenswrapper[5058]: E1014 07:25:50.417345 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449\": container with ID starting with 3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449 not found: ID does not exist" containerID="3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.417398 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449"} err="failed to get container status \"3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449\": rpc error: code = NotFound desc = could not find container \"3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449\": container with ID starting with 3de2c9f29a2a4509826158e61ece41c679d4cf40eb16394ea5b8a160e0c34449 not found: ID does not exist" Oct 14 07:25:50 crc kubenswrapper[5058]: I1014 07:25:50.802120 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" path="/var/lib/kubelet/pods/a06cc77b-a1a0-4d58-9c25-3ff46e409c9a/volumes" Oct 14 07:25:59 crc kubenswrapper[5058]: I1014 07:25:59.791240 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:25:59 crc kubenswrapper[5058]: E1014 07:25:59.792544 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.035096 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:05 crc kubenswrapper[5058]: E1014 07:26:05.047884 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="extract-utilities" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.047927 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="extract-utilities" Oct 14 07:26:05 crc kubenswrapper[5058]: E1014 07:26:05.047973 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="extract-content" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.047982 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="extract-content" Oct 14 07:26:05 crc kubenswrapper[5058]: E1014 07:26:05.048004 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="registry-server" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.048012 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="registry-server" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.048316 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06cc77b-a1a0-4d58-9c25-3ff46e409c9a" containerName="registry-server" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.057367 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.057840 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.195153 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.195253 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.195346 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvwk\" (UniqueName: \"kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.296365 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvwk\" (UniqueName: \"kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.296507 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.296566 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.297120 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.297244 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.324667 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8vvwk\" (UniqueName: \"kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk\") pod \"redhat-marketplace-25f8x\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.389518 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:05 crc kubenswrapper[5058]: I1014 07:26:05.826390 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:05 crc kubenswrapper[5058]: W1014 07:26:05.843197 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff6e81a_c688_464f_9087_b2069d864031.slice/crio-21d1711c91d60760d50f2d94e812346eed26cae5457bb696476e9338d39ff20c WatchSource:0}: Error finding container 21d1711c91d60760d50f2d94e812346eed26cae5457bb696476e9338d39ff20c: Status 404 returned error can't find the container with id 21d1711c91d60760d50f2d94e812346eed26cae5457bb696476e9338d39ff20c Oct 14 07:26:06 crc kubenswrapper[5058]: I1014 07:26:06.438246 5058 generic.go:334] "Generic (PLEG): container finished" podID="eff6e81a-c688-464f-9087-b2069d864031" containerID="431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e" exitCode=0 Oct 14 07:26:06 crc kubenswrapper[5058]: I1014 07:26:06.438447 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerDied","Data":"431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e"} Oct 14 07:26:06 crc kubenswrapper[5058]: I1014 07:26:06.438862 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerStarted","Data":"21d1711c91d60760d50f2d94e812346eed26cae5457bb696476e9338d39ff20c"} Oct 14 07:26:07 crc kubenswrapper[5058]: I1014 07:26:07.447676 5058 generic.go:334] "Generic (PLEG): container finished" podID="eff6e81a-c688-464f-9087-b2069d864031" containerID="c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb" exitCode=0 Oct 14 07:26:07 crc kubenswrapper[5058]: I1014 07:26:07.447988 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerDied","Data":"c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb"} Oct 14 07:26:08 crc kubenswrapper[5058]: I1014 07:26:08.459927 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerStarted","Data":"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee"} Oct 14 07:26:08 crc kubenswrapper[5058]: I1014 07:26:08.486474 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-25f8x" podStartSLOduration=1.916149775 podStartE2EDuration="3.486422385s" podCreationTimestamp="2025-10-14 07:26:05 +0000 UTC" firstStartedPulling="2025-10-14 07:26:06.441013221 +0000 UTC m=+2314.352097057" lastFinishedPulling="2025-10-14 07:26:08.011285841 +0000 UTC m=+2315.922369667" observedRunningTime="2025-10-14 07:26:08.482131754 +0000 UTC m=+2316.393215570" 
watchObservedRunningTime="2025-10-14 07:26:08.486422385 +0000 UTC m=+2316.397506201" Oct 14 07:26:14 crc kubenswrapper[5058]: I1014 07:26:14.790476 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:26:14 crc kubenswrapper[5058]: E1014 07:26:14.791287 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:26:15 crc kubenswrapper[5058]: I1014 07:26:15.390616 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:15 crc kubenswrapper[5058]: I1014 07:26:15.391025 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:15 crc kubenswrapper[5058]: I1014 07:26:15.447855 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:15 crc kubenswrapper[5058]: I1014 07:26:15.578823 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:15 crc kubenswrapper[5058]: I1014 07:26:15.684855 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:17 crc kubenswrapper[5058]: I1014 07:26:17.540188 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-25f8x" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="registry-server" containerID="cri-o://5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee" gracePeriod=2 Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.043618 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.191862 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content\") pod \"eff6e81a-c688-464f-9087-b2069d864031\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.192204 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvwk\" (UniqueName: \"kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk\") pod \"eff6e81a-c688-464f-9087-b2069d864031\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.192380 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities\") pod \"eff6e81a-c688-464f-9087-b2069d864031\" (UID: \"eff6e81a-c688-464f-9087-b2069d864031\") " Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.194053 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities" (OuterVolumeSpecName: "utilities") pod "eff6e81a-c688-464f-9087-b2069d864031" (UID: "eff6e81a-c688-464f-9087-b2069d864031"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.201068 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk" (OuterVolumeSpecName: "kube-api-access-8vvwk") pod "eff6e81a-c688-464f-9087-b2069d864031" (UID: "eff6e81a-c688-464f-9087-b2069d864031"). InnerVolumeSpecName "kube-api-access-8vvwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.205602 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eff6e81a-c688-464f-9087-b2069d864031" (UID: "eff6e81a-c688-464f-9087-b2069d864031"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.294151 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.294201 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6e81a-c688-464f-9087-b2069d864031-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.294222 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvwk\" (UniqueName: \"kubernetes.io/projected/eff6e81a-c688-464f-9087-b2069d864031-kube-api-access-8vvwk\") on node \"crc\" DevicePath \"\"" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.551897 5058 generic.go:334] "Generic (PLEG): container finished" podID="eff6e81a-c688-464f-9087-b2069d864031" containerID="5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee" exitCode=0 Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.551958 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerDied","Data":"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee"} Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.551999 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25f8x" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.552032 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25f8x" event={"ID":"eff6e81a-c688-464f-9087-b2069d864031","Type":"ContainerDied","Data":"21d1711c91d60760d50f2d94e812346eed26cae5457bb696476e9338d39ff20c"} Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.552088 5058 scope.go:117] "RemoveContainer" containerID="5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.591875 5058 scope.go:117] "RemoveContainer" containerID="c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.593503 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.607618 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-25f8x"] Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.614238 5058 scope.go:117] "RemoveContainer" containerID="431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.661317 5058 scope.go:117] "RemoveContainer" containerID="5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee" Oct 14 07:26:18 crc kubenswrapper[5058]: E1014 07:26:18.662217 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee\": container with ID starting with 5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee not found: ID does not exist" containerID="5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.662268 5058 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee"} err="failed to get container status \"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee\": rpc error: code = NotFound desc = could not find container \"5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee\": container with ID starting with 5627797b9faae621008f8e57f64f66f006282a4b9dd93d43a4a2764afcf0a4ee not found: ID does not exist" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.662294 5058 scope.go:117] "RemoveContainer" containerID="c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb" Oct 14 07:26:18 crc kubenswrapper[5058]: E1014 07:26:18.662578 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb\": container with ID starting with c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb not found: ID does not exist" containerID="c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.662627 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb"} err="failed to get container status \"c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb\": rpc error: code = NotFound desc = could not find container \"c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb\": container with ID starting with c4fde4bcf5c7fec1c28e388740c1addf33975226aedf00913b94b260dd241acb not found: ID does not exist" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.662663 5058 scope.go:117] "RemoveContainer" containerID="431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e" Oct 14 07:26:18 crc kubenswrapper[5058]: E1014 07:26:18.663130 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e\": container with ID starting with 431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e not found: ID does not exist" containerID="431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.663155 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e"} err="failed to get container status \"431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e\": rpc error: code = NotFound desc = could not find container \"431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e\": container with ID starting with 431ea531a3e0b7d01266b5541417edbbe6001ed6e45ff07f6969075c16cb6e0e not found: ID does not exist" Oct 14 07:26:18 crc kubenswrapper[5058]: I1014 07:26:18.809407 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff6e81a-c688-464f-9087-b2069d864031" path="/var/lib/kubelet/pods/eff6e81a-c688-464f-9087-b2069d864031/volumes" Oct 14 07:26:25 crc kubenswrapper[5058]: I1014 07:26:25.790125 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:26:25 crc kubenswrapper[5058]: E1014 07:26:25.791500 5058 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:26:39 crc kubenswrapper[5058]: I1014 07:26:39.791038 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:26:39 crc kubenswrapper[5058]: E1014 07:26:39.791970 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:26:52 crc kubenswrapper[5058]: I1014 07:26:52.795019 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:26:52 crc kubenswrapper[5058]: E1014 07:26:52.797498 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:27:05 crc kubenswrapper[5058]: I1014 07:27:05.790139 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:27:05 crc kubenswrapper[5058]: E1014 07:27:05.791109 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:27:17 crc kubenswrapper[5058]: I1014 07:27:17.789990 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:27:17 crc kubenswrapper[5058]: E1014 07:27:17.790850 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:27:32 crc kubenswrapper[5058]: I1014 07:27:32.789958 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:27:32 crc kubenswrapper[5058]: E1014 07:27:32.790729 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:27:46 crc kubenswrapper[5058]: I1014 07:27:46.789965 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:27:46 crc kubenswrapper[5058]: E1014 07:27:46.790969 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:27:57 crc kubenswrapper[5058]: I1014 07:27:57.789483 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:27:57 crc kubenswrapper[5058]: E1014 07:27:57.790316 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:28:12 crc kubenswrapper[5058]: I1014 07:28:12.799553 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:28:12 crc kubenswrapper[5058]: E1014 07:28:12.800333 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:28:24 crc kubenswrapper[5058]: I1014 07:28:24.791019 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:28:24 crc kubenswrapper[5058]: E1014 07:28:24.793024 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:28:39 crc kubenswrapper[5058]: I1014 07:28:39.790933 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:28:39 crc kubenswrapper[5058]: E1014 07:28:39.791650 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:28:50 crc kubenswrapper[5058]: I1014 07:28:50.790590 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:28:50 crc kubenswrapper[5058]: E1014 07:28:50.791402 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:29:02 crc kubenswrapper[5058]: I1014 07:29:02.797407 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:29:02 crc kubenswrapper[5058]: E1014 07:29:02.798152 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:29:13 crc kubenswrapper[5058]: I1014 07:29:13.790196 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:29:13 crc kubenswrapper[5058]: E1014 07:29:13.790658 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:29:26 crc kubenswrapper[5058]: I1014 07:29:26.790435 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:29:26 crc kubenswrapper[5058]: E1014 07:29:26.791466 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:29:41 crc kubenswrapper[5058]: I1014 07:29:41.790720 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:29:41 crc kubenswrapper[5058]: E1014 07:29:41.791916 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:29:55 crc kubenswrapper[5058]: I1014 07:29:55.790919 5058 scope.go:117] "RemoveContainer" 
containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:29:55 crc kubenswrapper[5058]: E1014 07:29:55.793465 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.160694 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm"] Oct 14 07:30:00 crc kubenswrapper[5058]: E1014 07:30:00.161436 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="extract-content" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.161455 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="extract-content" Oct 14 07:30:00 crc kubenswrapper[5058]: E1014 07:30:00.161479 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="extract-utilities" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.161507 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="extract-utilities" Oct 14 07:30:00 crc kubenswrapper[5058]: E1014 07:30:00.161527 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="registry-server" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.161535 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="registry-server" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.161711 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff6e81a-c688-464f-9087-b2069d864031" containerName="registry-server" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.162333 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.164940 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.165098 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.179143 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm"] Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.215395 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.215469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.215538 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88wb\" (UniqueName: \"kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.316500 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l88wb\" (UniqueName: \"kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.316933 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.317076 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.319197 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume\") pod 
\"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.328579 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.336546 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l88wb\" (UniqueName: \"kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb\") pod \"collect-profiles-29340450-2fngm\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.485222 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:00 crc kubenswrapper[5058]: I1014 07:30:00.913887 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm"] Oct 14 07:30:01 crc kubenswrapper[5058]: I1014 07:30:01.507138 5058 generic.go:334] "Generic (PLEG): container finished" podID="77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" containerID="233be8a73a8db64a75752c45428dddd58d7bf366a152bcc671df57c0a0ad6640" exitCode=0 Oct 14 07:30:01 crc kubenswrapper[5058]: I1014 07:30:01.507220 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" event={"ID":"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2","Type":"ContainerDied","Data":"233be8a73a8db64a75752c45428dddd58d7bf366a152bcc671df57c0a0ad6640"} Oct 14 07:30:01 crc kubenswrapper[5058]: I1014 07:30:01.507505 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" event={"ID":"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2","Type":"ContainerStarted","Data":"122c73005315535bc71182d3f7ec26b35920666b92e9d4744a5f9f93536047d8"} Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.884028 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.955009 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume\") pod \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.955115 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l88wb\" (UniqueName: \"kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb\") pod \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.955189 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume\") pod \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\" (UID: \"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2\") " Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.955850 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume" (OuterVolumeSpecName: "config-volume") pod "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" (UID: "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.964058 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" (UID: "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:30:02 crc kubenswrapper[5058]: I1014 07:30:02.964104 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb" (OuterVolumeSpecName: "kube-api-access-l88wb") pod "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" (UID: "77e31dbf-0ae6-4cb1-a98a-8181f91dbad2"). InnerVolumeSpecName "kube-api-access-l88wb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.056396 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l88wb\" (UniqueName: \"kubernetes.io/projected/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-kube-api-access-l88wb\") on node \"crc\" DevicePath \"\"" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.056434 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.056447 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.524372 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" event={"ID":"77e31dbf-0ae6-4cb1-a98a-8181f91dbad2","Type":"ContainerDied","Data":"122c73005315535bc71182d3f7ec26b35920666b92e9d4744a5f9f93536047d8"} Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.524409 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122c73005315535bc71182d3f7ec26b35920666b92e9d4744a5f9f93536047d8" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.524404 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm" Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.969308 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"] Oct 14 07:30:03 crc kubenswrapper[5058]: I1014 07:30:03.980602 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340405-tpnng"] Oct 14 07:30:04 crc kubenswrapper[5058]: I1014 07:30:04.801597 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cab2f8a-7c4f-4faa-a89c-8d566cc1b057" path="/var/lib/kubelet/pods/0cab2f8a-7c4f-4faa-a89c-8d566cc1b057/volumes" Oct 14 07:30:09 crc kubenswrapper[5058]: I1014 07:30:09.791090 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:30:10 crc kubenswrapper[5058]: I1014 07:30:10.584045 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"28cff826e9215ef9bfaa7132d9d957a3fa0e1725a30ec2578135a60e7a70f939"} Oct 14 07:30:38 crc kubenswrapper[5058]: I1014 07:30:38.216559 5058 scope.go:117] "RemoveContainer" containerID="7be7bdee3f79c3c2070935c0f70b7595231101391214a31adfee54b5118d8568" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.225281 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:26 crc kubenswrapper[5058]: E1014 07:32:26.226578 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" containerName="collect-profiles" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.226611 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" containerName="collect-profiles" Oct 
14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.226974 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" containerName="collect-profiles" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.229140 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.242353 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.353321 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.353417 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcdn\" (UniqueName: \"kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.353494 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.454984 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.455060 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.455096 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwcdn\" (UniqueName: \"kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.455681 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.455743 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.477872 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwcdn\" (UniqueName: \"kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn\") pod \"certified-operators-zr4cr\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:26 crc kubenswrapper[5058]: I1014 07:32:26.607369 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:27 crc kubenswrapper[5058]: I1014 07:32:27.052530 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:27 crc kubenswrapper[5058]: W1014 07:32:27.068004 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81b72ccc_1d9b_4ab5_a5cf_7c90e037ac39.slice/crio-1fa8a0f0992727589d637f922423e567a4aa3ba7fc883267f74b8a1b9ac7017b WatchSource:0}: Error finding container 1fa8a0f0992727589d637f922423e567a4aa3ba7fc883267f74b8a1b9ac7017b: Status 404 returned error can't find the container with id 1fa8a0f0992727589d637f922423e567a4aa3ba7fc883267f74b8a1b9ac7017b Oct 14 07:32:27 crc kubenswrapper[5058]: I1014 07:32:27.808382 5058 generic.go:334] "Generic (PLEG): container finished" podID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerID="f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1" exitCode=0 Oct 14 07:32:27 crc kubenswrapper[5058]: I1014 07:32:27.808448 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerDied","Data":"f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1"} Oct 14 07:32:27 crc kubenswrapper[5058]: I1014 07:32:27.808843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerStarted","Data":"1fa8a0f0992727589d637f922423e567a4aa3ba7fc883267f74b8a1b9ac7017b"} Oct 14 07:32:27 crc kubenswrapper[5058]: I1014 07:32:27.811060 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 07:32:29 crc kubenswrapper[5058]: I1014 07:32:29.830678 5058 generic.go:334] "Generic (PLEG): container finished" podID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerID="4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343" exitCode=0 Oct 14 07:32:29 crc kubenswrapper[5058]: I1014 07:32:29.830792 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerDied","Data":"4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343"} Oct 14 07:32:31 crc kubenswrapper[5058]: I1014 07:32:31.849867 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerStarted","Data":"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793"} Oct 14 07:32:31 crc kubenswrapper[5058]: I1014 07:32:31.873442 
5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zr4cr" podStartSLOduration=2.911143049 podStartE2EDuration="5.873424443s" podCreationTimestamp="2025-10-14 07:32:26 +0000 UTC" firstStartedPulling="2025-10-14 07:32:27.810605112 +0000 UTC m=+2695.721688958" lastFinishedPulling="2025-10-14 07:32:30.772886536 +0000 UTC m=+2698.683970352" observedRunningTime="2025-10-14 07:32:31.868201336 +0000 UTC m=+2699.779285162" watchObservedRunningTime="2025-10-14 07:32:31.873424443 +0000 UTC m=+2699.784508259" Oct 14 07:32:33 crc kubenswrapper[5058]: I1014 07:32:33.657892 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:32:33 crc kubenswrapper[5058]: I1014 07:32:33.658296 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:32:36 crc kubenswrapper[5058]: I1014 07:32:36.608359 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:36 crc kubenswrapper[5058]: I1014 07:32:36.608744 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:36 crc kubenswrapper[5058]: I1014 07:32:36.685655 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:36 crc kubenswrapper[5058]: I1014 07:32:36.945663 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:37 crc kubenswrapper[5058]: I1014 07:32:37.008745 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:38 crc kubenswrapper[5058]: I1014 07:32:38.911290 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zr4cr" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="registry-server" containerID="cri-o://1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793" gracePeriod=2 Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.386371 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.449617 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwcdn\" (UniqueName: \"kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn\") pod \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.449705 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities\") pod \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.449722 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content\") pod \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\" (UID: \"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39\") " Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.461016 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities" (OuterVolumeSpecName: "utilities") pod "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" (UID: "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.468057 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn" (OuterVolumeSpecName: "kube-api-access-mwcdn") pod "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" (UID: "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39"). InnerVolumeSpecName "kube-api-access-mwcdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.551188 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwcdn\" (UniqueName: \"kubernetes.io/projected/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-kube-api-access-mwcdn\") on node \"crc\" DevicePath \"\"" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.551232 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.667272 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" (UID: "81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.754210 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.929256 5058 generic.go:334] "Generic (PLEG): container finished" podID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerID="1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793" exitCode=0 Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.929332 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerDied","Data":"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793"} Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.929383 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr4cr" event={"ID":"81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39","Type":"ContainerDied","Data":"1fa8a0f0992727589d637f922423e567a4aa3ba7fc883267f74b8a1b9ac7017b"} Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.929417 5058 scope.go:117] "RemoveContainer" containerID="1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.929437 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr4cr" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.985232 5058 scope.go:117] "RemoveContainer" containerID="4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343" Oct 14 07:32:39 crc kubenswrapper[5058]: I1014 07:32:39.999778 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.011053 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zr4cr"] Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.029182 5058 scope.go:117] "RemoveContainer" containerID="f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.059467 5058 scope.go:117] "RemoveContainer" containerID="1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793" Oct 14 07:32:40 crc kubenswrapper[5058]: E1014 07:32:40.060005 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793\": container with ID starting with 1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793 not found: ID does not exist" containerID="1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.060079 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793"} err="failed to get container status \"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793\": rpc error: code = NotFound desc = could not find container \"1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793\": container with ID starting with 1430e4591366fe19708bd945a5df5cb830514977c76ac2aa4ffa2e4ea3a8e793 not found: ID does not exist" Oct 14 
07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.060122 5058 scope.go:117] "RemoveContainer" containerID="4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343" Oct 14 07:32:40 crc kubenswrapper[5058]: E1014 07:32:40.060578 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343\": container with ID starting with 4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343 not found: ID does not exist" containerID="4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.060664 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343"} err="failed to get container status \"4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343\": rpc error: code = NotFound desc = could not find container \"4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343\": container with ID starting with 4aebba4e89aab5e910989943ea3417668b7c2fe5567378338ba4d6d16b318343 not found: ID does not exist" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.060699 5058 scope.go:117] "RemoveContainer" containerID="f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1" Oct 14 07:32:40 crc kubenswrapper[5058]: E1014 07:32:40.061116 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1\": container with ID starting with f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1 not found: ID does not exist" containerID="f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.061185 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1"} err="failed to get container status \"f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1\": rpc error: code = NotFound desc = could not find container \"f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1\": container with ID starting with f87d9fdc67fb6d22474aa897d5b414872ef9e19af8a45b1d177091a1c7bc9ed1 not found: ID does not exist" Oct 14 07:32:40 crc kubenswrapper[5058]: I1014 07:32:40.806568 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" path="/var/lib/kubelet/pods/81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39/volumes" Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.482697 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhm75"] Oct 14 07:32:51 crc kubenswrapper[5058]: E1014 07:32:51.484166 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="extract-utilities" Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.484200 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="extract-utilities" Oct 14 07:32:51 crc kubenswrapper[5058]: E1014 07:32:51.484231 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="extract-content" Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.484249 
5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="extract-content"
Oct 14 07:32:51 crc kubenswrapper[5058]: E1014 07:32:51.484293 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="registry-server"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.484311 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="registry-server"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.484647 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b72ccc-1d9b-4ab5-a5cf-7c90e037ac39" containerName="registry-server"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.487356 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.499730 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhm75"]
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.537406 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.537732 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.537806 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxbw5\" (UniqueName: \"kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.639315 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.639409 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.639459 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxbw5\" (UniqueName: \"kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.640046 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.640112 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.660340 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxbw5\" (UniqueName: \"kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5\") pod \"redhat-operators-zhm75\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:51 crc kubenswrapper[5058]: I1014 07:32:51.817211 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhm75"
Oct 14 07:32:52 crc kubenswrapper[5058]: I1014 07:32:52.272021 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhm75"]
Oct 14 07:32:53 crc kubenswrapper[5058]: I1014 07:32:53.057466 5058 generic.go:334] "Generic (PLEG): container finished" podID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerID="646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a" exitCode=0
Oct 14 07:32:53 crc kubenswrapper[5058]: I1014 07:32:53.057525 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerDied","Data":"646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a"}
Oct 14 07:32:53 crc kubenswrapper[5058]: I1014 07:32:53.057565 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerStarted","Data":"dd2cb20ce367cfd97bb3c8ad5985cb1a7650224ceb52a4b509559daa619a7ae5"}
Oct 14 07:32:54 crc kubenswrapper[5058]: I1014 07:32:54.070369 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerStarted","Data":"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35"}
Oct 14 07:32:55 crc kubenswrapper[5058]: I1014 07:32:55.079117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerDied","Data":"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35"}
Oct 14 07:32:55 crc kubenswrapper[5058]: I1014 07:32:55.079065 5058 generic.go:334] "Generic (PLEG): container finished" podID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerID="fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35" exitCode=0
Oct 14 07:32:56 crc kubenswrapper[5058]: I1014 07:32:56.101141 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerStarted","Data":"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a"}
Oct 14 07:32:56 crc kubenswrapper[5058]: I1014 07:32:56.132681 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhm75" podStartSLOduration=2.689116237 podStartE2EDuration="5.132659445s" podCreationTimestamp="2025-10-14 07:32:51 +0000 UTC" firstStartedPulling="2025-10-14 07:32:53.060979197 +0000 UTC m=+2720.972063043" lastFinishedPulling="2025-10-14 07:32:55.504522415 +0000 UTC m=+2723.415606251" observedRunningTime="2025-10-14 07:32:56.123165518 +0000 UTC m=+2724.034249364" watchObservedRunningTime="2025-10-14 07:32:56.132659445 +0000 UTC m=+2724.043743261"
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhm75" Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.754124 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content\") pod \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.754160 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxbw5\" (UniqueName: \"kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5\") pod \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.754199 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities\") pod \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\" (UID: \"e1397b8b-c1ca-489e-ba2e-23a57f11781d\") " Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.756086 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities" (OuterVolumeSpecName: "utilities") pod "e1397b8b-c1ca-489e-ba2e-23a57f11781d" (UID: "e1397b8b-c1ca-489e-ba2e-23a57f11781d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.778776 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5" (OuterVolumeSpecName: "kube-api-access-fxbw5") pod "e1397b8b-c1ca-489e-ba2e-23a57f11781d" (UID: "e1397b8b-c1ca-489e-ba2e-23a57f11781d"). InnerVolumeSpecName "kube-api-access-fxbw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.856526 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxbw5\" (UniqueName: \"kubernetes.io/projected/e1397b8b-c1ca-489e-ba2e-23a57f11781d-kube-api-access-fxbw5\") on node \"crc\" DevicePath \"\"" Oct 14 07:33:04 crc kubenswrapper[5058]: I1014 07:33:04.856574 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.183683 5058 generic.go:334] "Generic (PLEG): container finished" podID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerID="d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a" exitCode=0 Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.183735 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerDied","Data":"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a"} Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.183775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhm75" event={"ID":"e1397b8b-c1ca-489e-ba2e-23a57f11781d","Type":"ContainerDied","Data":"dd2cb20ce367cfd97bb3c8ad5985cb1a7650224ceb52a4b509559daa619a7ae5"} Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.183816 5058 scope.go:117] "RemoveContainer" containerID="d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.183867 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhm75" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.208248 5058 scope.go:117] "RemoveContainer" containerID="fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.230657 5058 scope.go:117] "RemoveContainer" containerID="646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.272154 5058 scope.go:117] "RemoveContainer" containerID="d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a" Oct 14 07:33:05 crc kubenswrapper[5058]: E1014 07:33:05.272598 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a\": container with ID starting with d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a not found: ID does not exist" containerID="d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.272642 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a"} err="failed to get container status \"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a\": rpc error: code = NotFound desc = could not find container \"d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a\": container with ID starting with d8e61f03f201d28f27f99109fe468dda32a79e2c72655e6a3aa6e34890c97d0a not found: ID does not exist" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.272674 5058 scope.go:117] "RemoveContainer" containerID="fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35" Oct 14 07:33:05 crc kubenswrapper[5058]: E1014 07:33:05.273145 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35\": container with ID starting with fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35 not found: ID does not exist" containerID="fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.273182 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35"} err="failed to get container status \"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35\": rpc error: code = NotFound desc = could not find container \"fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35\": container with ID starting with fff4eabd7334fa7b604b1ad82ad0329c353e238f02464c017d2a1d953e7b2a35 not found: ID does not exist" Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.273212 5058 scope.go:117] "RemoveContainer" containerID="646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a" Oct 14 07:33:05 crc kubenswrapper[5058]: E1014 07:33:05.273670 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a\": container with ID starting with 646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a not found: ID does not exist" containerID="646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a" 
Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.273699 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a"} err="failed to get container status \"646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a\": rpc error: code = NotFound desc = could not find container \"646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a\": container with ID starting with 646c5fb5e5df9f1e78a482b3477d91649daf40212491ccd14d3fa157be141e0a not found: ID does not exist"
Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.832295 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1397b8b-c1ca-489e-ba2e-23a57f11781d" (UID: "e1397b8b-c1ca-489e-ba2e-23a57f11781d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:33:05 crc kubenswrapper[5058]: I1014 07:33:05.873635 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1397b8b-c1ca-489e-ba2e-23a57f11781d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 07:33:06 crc kubenswrapper[5058]: I1014 07:33:06.133866 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhm75"]
Oct 14 07:33:06 crc kubenswrapper[5058]: I1014 07:33:06.140596 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhm75"]
Oct 14 07:33:06 crc kubenswrapper[5058]: I1014 07:33:06.806409 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" path="/var/lib/kubelet/pods/e1397b8b-c1ca-489e-ba2e-23a57f11781d/volumes"
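The burst of "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above looks like the kubelet retrying deletion of containers CRI-O has already pruned along with the pod sandbox; the NotFound errors read as expected teardown noise rather than a deletion failure, since the pod's volumes are cleaned up and the pod is REMOVEd immediately afterwards. A small, hypothetical filter for spotting the pattern in a dump like this one (reads journal text on stdin; the regex and the 12-character ID abbreviation are illustrative choices, not kubelet conventions):

    import re
    import sys

    # Count "DeleteContainer returned error" entries per container ID.
    pat = re.compile(r'DeleteContainer returned error.*?"ID":"([0-9a-f]{64})"')
    counts = {}
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    for cid, n in sorted(counts.items()):
        print(f"{n}x {cid[:12]}")  # every hit in this log is an already-removed container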
containerName="machine-config-daemon" containerID="cri-o://28cff826e9215ef9bfaa7132d9d957a3fa0e1725a30ec2578135a60e7a70f939" gracePeriod=600 Oct 14 07:33:34 crc kubenswrapper[5058]: I1014 07:33:34.452323 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="28cff826e9215ef9bfaa7132d9d957a3fa0e1725a30ec2578135a60e7a70f939" exitCode=0 Oct 14 07:33:34 crc kubenswrapper[5058]: I1014 07:33:34.452398 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"28cff826e9215ef9bfaa7132d9d957a3fa0e1725a30ec2578135a60e7a70f939"} Oct 14 07:33:34 crc kubenswrapper[5058]: I1014 07:33:34.453775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9"} Oct 14 07:33:34 crc kubenswrapper[5058]: I1014 07:33:34.453816 5058 scope.go:117] "RemoveContainer" containerID="0d08ef6d2c5ee3ed5b8786ec7caeecacd3209f93966dd8da886ab8aa12182799" Oct 14 07:35:33 crc kubenswrapper[5058]: I1014 07:35:33.656478 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:35:33 crc kubenswrapper[5058]: I1014 07:35:33.658005 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.783181 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-225sw"] Oct 14 07:35:55 crc kubenswrapper[5058]: E1014 07:35:55.784186 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="extract-utilities" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.784204 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="extract-utilities" Oct 14 07:35:55 crc kubenswrapper[5058]: E1014 07:35:55.784220 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="registry-server" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.784229 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="registry-server" Oct 14 07:35:55 crc kubenswrapper[5058]: E1014 07:35:55.784270 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="extract-content" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.784278 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" containerName="extract-content" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.784517 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1397b8b-c1ca-489e-ba2e-23a57f11781d" 
containerName="registry-server" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.785902 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.793828 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-225sw"] Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.896688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9jx\" (UniqueName: \"kubernetes.io/projected/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-kube-api-access-5r9jx\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.896898 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-catalog-content\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.896958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-utilities\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.998212 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-catalog-content\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.998352 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-utilities\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.998413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9jx\" (UniqueName: \"kubernetes.io/projected/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-kube-api-access-5r9jx\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.999266 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-utilities\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:55 crc kubenswrapper[5058]: I1014 07:35:55.999423 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-catalog-content\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " 
pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:56 crc kubenswrapper[5058]: I1014 07:35:56.021233 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9jx\" (UniqueName: \"kubernetes.io/projected/c8b5d51c-ae71-4b85-bb3f-768c509cb1b1-kube-api-access-5r9jx\") pod \"community-operators-225sw\" (UID: \"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1\") " pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:56 crc kubenswrapper[5058]: I1014 07:35:56.129373 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-225sw" Oct 14 07:35:56 crc kubenswrapper[5058]: I1014 07:35:56.627314 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-225sw"] Oct 14 07:35:56 crc kubenswrapper[5058]: I1014 07:35:56.643562 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-225sw" event={"ID":"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1","Type":"ContainerStarted","Data":"4396d26ae742891782194aa5fbdf0412bb6160b30a24bd1609a523ce182b96a0"} Oct 14 07:35:57 crc kubenswrapper[5058]: I1014 07:35:57.674373 5058 generic.go:334] "Generic (PLEG): container finished" podID="c8b5d51c-ae71-4b85-bb3f-768c509cb1b1" containerID="1f446dcba4d962fb05baf024be3b78b761d228e46534d3102632a62f7123d160" exitCode=0 Oct 14 07:35:57 crc kubenswrapper[5058]: I1014 07:35:57.674512 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-225sw" event={"ID":"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1","Type":"ContainerDied","Data":"1f446dcba4d962fb05baf024be3b78b761d228e46534d3102632a62f7123d160"} Oct 14 07:36:01 crc kubenswrapper[5058]: I1014 07:36:01.711160 5058 generic.go:334] "Generic (PLEG): container finished" podID="c8b5d51c-ae71-4b85-bb3f-768c509cb1b1" containerID="42b6da0d1c22405a8ee06caf4c676e43a81fd8bc4e515328c39cb8d191db3561" exitCode=0 Oct 14 07:36:01 crc kubenswrapper[5058]: I1014 07:36:01.711219 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-225sw" event={"ID":"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1","Type":"ContainerDied","Data":"42b6da0d1c22405a8ee06caf4c676e43a81fd8bc4e515328c39cb8d191db3561"} Oct 14 07:36:02 crc kubenswrapper[5058]: I1014 07:36:02.723605 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-225sw" event={"ID":"c8b5d51c-ae71-4b85-bb3f-768c509cb1b1","Type":"ContainerStarted","Data":"18763d61d317a3d32d6561ca17b5f146ea7c135e9aed2e1a9717661a36fd0d53"} Oct 14 07:36:02 crc kubenswrapper[5058]: I1014 07:36:02.753163 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-225sw" podStartSLOduration=3.17486853 podStartE2EDuration="7.753131366s" podCreationTimestamp="2025-10-14 07:35:55 +0000 UTC" firstStartedPulling="2025-10-14 07:35:57.677118703 +0000 UTC m=+2905.588202519" lastFinishedPulling="2025-10-14 07:36:02.255381549 +0000 UTC m=+2910.166465355" observedRunningTime="2025-10-14 07:36:02.74259909 +0000 UTC m=+2910.653682916" watchObservedRunningTime="2025-10-14 07:36:02.753131366 +0000 UTC m=+2910.664215202" Oct 14 07:36:03 crc kubenswrapper[5058]: I1014 07:36:03.655841 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Oct 14 07:36:03 crc kubenswrapper[5058]: I1014 07:36:03.655940 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 07:36:06 crc kubenswrapper[5058]: I1014 07:36:06.130539 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-225sw"
Oct 14 07:36:06 crc kubenswrapper[5058]: I1014 07:36:06.131124 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-225sw"
Oct 14 07:36:06 crc kubenswrapper[5058]: I1014 07:36:06.191843 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-225sw"
Oct 14 07:36:09 crc kubenswrapper[5058]: I1014 07:36:09.975863 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"]
Oct 14 07:36:09 crc kubenswrapper[5058]: I1014 07:36:09.980641 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:09.994219 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"]
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.027485 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.027695 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hjf\" (UniqueName: \"kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.027957 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.129729 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hjf\" (UniqueName: \"kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.129784 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.129855 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.130354 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.130453 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.166413 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hjf\" (UniqueName: \"kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf\") pod \"redhat-marketplace-4n5ld\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.332412 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.751151 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"]
Oct 14 07:36:10 crc kubenswrapper[5058]: I1014 07:36:10.779512 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerStarted","Data":"7ae4a535a2ab47719f45a1b36e502d5a055d2ac684af78e8ccd7b81adb109f73"}
Oct 14 07:36:11 crc kubenswrapper[5058]: I1014 07:36:11.788744 5058 generic.go:334] "Generic (PLEG): container finished" podID="d61390a3-fd9c-453c-856b-ea04de789715" containerID="ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2" exitCode=0
Oct 14 07:36:11 crc kubenswrapper[5058]: I1014 07:36:11.788967 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerDied","Data":"ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2"}
Oct 14 07:36:12 crc kubenswrapper[5058]: I1014 07:36:12.810429 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerStarted","Data":"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3"}
Oct 14 07:36:13 crc kubenswrapper[5058]: I1014 07:36:13.816214 5058 generic.go:334] "Generic (PLEG): container finished" podID="d61390a3-fd9c-453c-856b-ea04de789715" containerID="a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3" exitCode=0
Oct 14 07:36:13 crc kubenswrapper[5058]: I1014 07:36:13.816290 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerDied","Data":"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3"}
Oct 14 07:36:14 crc kubenswrapper[5058]: I1014 07:36:14.826908 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerStarted","Data":"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2"}
Oct 14 07:36:14 crc kubenswrapper[5058]: I1014 07:36:14.849078 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4n5ld" podStartSLOduration=3.328015327 podStartE2EDuration="5.849060959s" podCreationTimestamp="2025-10-14 07:36:09 +0000 UTC" firstStartedPulling="2025-10-14 07:36:11.790225709 +0000 UTC m=+2919.701309525" lastFinishedPulling="2025-10-14 07:36:14.311271341 +0000 UTC m=+2922.222355157" observedRunningTime="2025-10-14 07:36:14.845077347 +0000 UTC m=+2922.756161173" watchObservedRunningTime="2025-10-14 07:36:14.849060959 +0000 UTC m=+2922.760144765"
Oct 14 07:36:16 crc kubenswrapper[5058]: I1014 07:36:16.200784 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-225sw"
Oct 14 07:36:16 crc kubenswrapper[5058]: I1014 07:36:16.970626 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-225sw"]
Oct 14 07:36:17 crc kubenswrapper[5058]: I1014 07:36:17.157405 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgh6z"]
Oct 14 07:36:17 crc kubenswrapper[5058]: I1014 07:36:17.157767 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgh6z" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="registry-server" containerID="cri-o://7574808b8fb0172a1433f15e27139e95d3f7d33337f8c7120c66aa402d7e7665" gracePeriod=2
Oct 14 07:36:17 crc kubenswrapper[5058]: I1014 07:36:17.863168 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerID="7574808b8fb0172a1433f15e27139e95d3f7d33337f8c7120c66aa402d7e7665" exitCode=0
Oct 14 07:36:17 crc kubenswrapper[5058]: I1014 07:36:17.863231 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerDied","Data":"7574808b8fb0172a1433f15e27139e95d3f7d33337f8c7120c66aa402d7e7665"}
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.304846 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.454038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content\") pod \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") "
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.454172 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities\") pod \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") "
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.454241 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnjlg\" (UniqueName: \"kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg\") pod \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\" (UID: \"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe\") "
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.458657 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities" (OuterVolumeSpecName: "utilities") pod "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" (UID: "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.467861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg" (OuterVolumeSpecName: "kube-api-access-xnjlg") pod "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" (UID: "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe"). InnerVolumeSpecName "kube-api-access-xnjlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.523968 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" (UID: "3c1b4d67-6d7d-4656-80cd-ac114b08ffbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.558629 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.558665 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.558675 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnjlg\" (UniqueName: \"kubernetes.io/projected/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe-kube-api-access-xnjlg\") on node \"crc\" DevicePath \"\""
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.873858 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgh6z" event={"ID":"3c1b4d67-6d7d-4656-80cd-ac114b08ffbe","Type":"ContainerDied","Data":"31a9f51166d286d6eba1f211817033840266af88c44ea03357c18c64e6a397d2"}
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.873907 5058 scope.go:117] "RemoveContainer" containerID="7574808b8fb0172a1433f15e27139e95d3f7d33337f8c7120c66aa402d7e7665"
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.873935 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgh6z"
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.894357 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgh6z"]
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.898685 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgh6z"]
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.899140 5058 scope.go:117] "RemoveContainer" containerID="081c4d93f69a27d6638d6667c726fc4bcff4751ff657ba3e7ad320df4b421bec"
Oct 14 07:36:18 crc kubenswrapper[5058]: I1014 07:36:18.921390 5058 scope.go:117] "RemoveContainer" containerID="a851e36534c0086221486e384e09cac4c458229f7266aa51c4964456a27c2969"
Oct 14 07:36:20 crc kubenswrapper[5058]: I1014 07:36:20.332533 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:20 crc kubenswrapper[5058]: I1014 07:36:20.332586 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:20 crc kubenswrapper[5058]: I1014 07:36:20.398305 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:20 crc kubenswrapper[5058]: I1014 07:36:20.803888 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" path="/var/lib/kubelet/pods/3c1b4d67-6d7d-4656-80cd-ac114b08ffbe/volumes"
Oct 14 07:36:20 crc kubenswrapper[5058]: I1014 07:36:20.958193 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4n5ld"
Oct 14 07:36:22 crc kubenswrapper[5058]: I1014 07:36:22.356260 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"]
Oct 14 07:36:22 crc kubenswrapper[5058]: I1014 07:36:22.908285 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4n5ld" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="registry-server" containerID="cri-o://8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" gracePeriod=2
container with a grace period" pod="openshift-marketplace/redhat-marketplace-4n5ld" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="registry-server" containerID="cri-o://8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" gracePeriod=2 Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.339392 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n5ld" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.427685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content\") pod \"d61390a3-fd9c-453c-856b-ea04de789715\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.427751 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6hjf\" (UniqueName: \"kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf\") pod \"d61390a3-fd9c-453c-856b-ea04de789715\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.427791 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities\") pod \"d61390a3-fd9c-453c-856b-ea04de789715\" (UID: \"d61390a3-fd9c-453c-856b-ea04de789715\") " Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.428837 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities" (OuterVolumeSpecName: "utilities") pod "d61390a3-fd9c-453c-856b-ea04de789715" (UID: "d61390a3-fd9c-453c-856b-ea04de789715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.433014 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf" (OuterVolumeSpecName: "kube-api-access-p6hjf") pod "d61390a3-fd9c-453c-856b-ea04de789715" (UID: "d61390a3-fd9c-453c-856b-ea04de789715"). InnerVolumeSpecName "kube-api-access-p6hjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.445617 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d61390a3-fd9c-453c-856b-ea04de789715" (UID: "d61390a3-fd9c-453c-856b-ea04de789715"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.528685 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.528712 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6hjf\" (UniqueName: \"kubernetes.io/projected/d61390a3-fd9c-453c-856b-ea04de789715-kube-api-access-p6hjf\") on node \"crc\" DevicePath \"\"" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.528725 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d61390a3-fd9c-453c-856b-ea04de789715-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.925279 5058 generic.go:334] "Generic (PLEG): container finished" podID="d61390a3-fd9c-453c-856b-ea04de789715" containerID="8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" exitCode=0 Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.925425 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n5ld" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.925986 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerDied","Data":"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2"} Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.926183 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n5ld" event={"ID":"d61390a3-fd9c-453c-856b-ea04de789715","Type":"ContainerDied","Data":"7ae4a535a2ab47719f45a1b36e502d5a055d2ac684af78e8ccd7b81adb109f73"} Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.926250 5058 scope.go:117] "RemoveContainer" containerID="8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.961558 5058 scope.go:117] "RemoveContainer" containerID="a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3" Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.984318 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"] Oct 14 07:36:23 crc kubenswrapper[5058]: I1014 07:36:23.994744 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n5ld"] Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.004142 5058 scope.go:117] "RemoveContainer" containerID="ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.041185 5058 scope.go:117] "RemoveContainer" containerID="8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" Oct 14 07:36:24 crc kubenswrapper[5058]: E1014 07:36:24.041925 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2\": container with ID starting with 8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2 not found: ID does not exist" containerID="8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.041999 5058 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2"} err="failed to get container status \"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2\": rpc error: code = NotFound desc = could not find container \"8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2\": container with ID starting with 8edbcfbb6d5b653fac2e1a3a0990b14f23aedafd6c8c0b360abf1b34c07017b2 not found: ID does not exist" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.042047 5058 scope.go:117] "RemoveContainer" containerID="a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3" Oct 14 07:36:24 crc kubenswrapper[5058]: E1014 07:36:24.042708 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3\": container with ID starting with a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3 not found: ID does not exist" containerID="a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.042756 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3"} err="failed to get container status \"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3\": rpc error: code = NotFound desc = could not find container \"a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3\": container with ID starting with a5232ff3dd00a28f4240fc0354fbd057f5805a7e5f8145ce68e0d53781ae0fb3 not found: ID does not exist" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.042790 5058 scope.go:117] "RemoveContainer" containerID="ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2" Oct 14 07:36:24 crc kubenswrapper[5058]: E1014 07:36:24.043211 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2\": container with ID starting with ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2 not found: ID does not exist" containerID="ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.043276 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2"} err="failed to get container status \"ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2\": rpc error: code = NotFound desc = could not find container \"ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2\": container with ID starting with ac16a3cdb7a88ea843d2cbfc2c43fafbcf7d9e12d4349173977fd63f90079cd2 not found: ID does not exist" Oct 14 07:36:24 crc kubenswrapper[5058]: I1014 07:36:24.806565 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61390a3-fd9c-453c-856b-ea04de789715" path="/var/lib/kubelet/pods/d61390a3-fd9c-453c-856b-ea04de789715/volumes" Oct 14 07:36:33 crc kubenswrapper[5058]: I1014 07:36:33.655770 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:36:33 crc kubenswrapper[5058]: I1014 07:36:33.656413 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:36:33 crc kubenswrapper[5058]: I1014 07:36:33.656475 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:36:33 crc kubenswrapper[5058]: I1014 07:36:33.657340 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:36:33 crc kubenswrapper[5058]: I1014 07:36:33.657435 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" gracePeriod=600 Oct 14 07:36:33 crc kubenswrapper[5058]: E1014 07:36:33.786191 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:36:34 crc kubenswrapper[5058]: I1014 07:36:34.033483 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" exitCode=0 Oct 14 07:36:34 crc kubenswrapper[5058]: I1014 07:36:34.033527 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9"} Oct 14 07:36:34 crc kubenswrapper[5058]: I1014 07:36:34.033582 5058 scope.go:117] "RemoveContainer" containerID="28cff826e9215ef9bfaa7132d9d957a3fa0e1725a30ec2578135a60e7a70f939" Oct 14 07:36:34 crc kubenswrapper[5058]: I1014 07:36:34.034211 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:36:34 crc kubenswrapper[5058]: E1014 07:36:34.034508 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:36:47 crc kubenswrapper[5058]: I1014 07:36:47.790611 5058 scope.go:117] "RemoveContainer" 
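From this point the pod is in CrashLoopBackOff: the "back-off 5m0s" in the error suggests the restart back-off has reached its cap after repeated failures, and until it expires each periodic re-sync re-logs the same "RemoveContainer" / "Error syncing pod" pair. A rough reading of the retry cadence, computed from the timestamps of the entries that follow (the ~13-15 s spacing is the sync retry interval, not the back-off itself):

    from datetime import datetime

    # Timestamps of the "RemoveContainer" retries below (copied from this log).
    retries = ["07:36:47", "07:37:00", "07:37:12", "07:37:27", "07:37:41",
               "07:37:55", "07:38:08", "07:38:21", "07:38:35", "07:38:48", "07:39:02"]
    t = [datetime.strptime(s, "%H:%M:%S") for s in retries]
    gaps = [(b - a).total_seconds() for a, b in zip(t, t[1:])]
    print(gaps)  # [13.0, 12.0, 15.0, 14.0, 14.0, 13.0, 13.0, 14.0, 13.0, 14.0]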
containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:36:47 crc kubenswrapper[5058]: E1014 07:36:47.791447 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:37:00 crc kubenswrapper[5058]: I1014 07:37:00.789818 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:37:00 crc kubenswrapper[5058]: E1014 07:37:00.790621 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:37:12 crc kubenswrapper[5058]: I1014 07:37:12.798293 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:37:12 crc kubenswrapper[5058]: E1014 07:37:12.799136 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:37:27 crc kubenswrapper[5058]: I1014 07:37:27.789624 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:37:27 crc kubenswrapper[5058]: E1014 07:37:27.790597 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:37:41 crc kubenswrapper[5058]: I1014 07:37:41.790319 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:37:41 crc kubenswrapper[5058]: E1014 07:37:41.791068 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:37:55 crc kubenswrapper[5058]: I1014 07:37:55.790512 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:37:55 crc kubenswrapper[5058]: E1014 07:37:55.791521 5058 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:38:08 crc kubenswrapper[5058]: I1014 07:38:08.790400 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:38:08 crc kubenswrapper[5058]: E1014 07:38:08.791342 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:38:21 crc kubenswrapper[5058]: I1014 07:38:21.789267 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:38:21 crc kubenswrapper[5058]: E1014 07:38:21.789918 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:38:35 crc kubenswrapper[5058]: I1014 07:38:35.790406 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:38:35 crc kubenswrapper[5058]: E1014 07:38:35.791122 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:38:48 crc kubenswrapper[5058]: I1014 07:38:48.790911 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:38:48 crc kubenswrapper[5058]: E1014 07:38:48.792155 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:39:02 crc kubenswrapper[5058]: I1014 07:39:02.796622 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:39:02 crc kubenswrapper[5058]: E1014 07:39:02.798205 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:39:16 crc kubenswrapper[5058]: I1014 07:39:16.790775 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:39:16 crc kubenswrapper[5058]: E1014 07:39:16.792077 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:39:31 crc kubenswrapper[5058]: I1014 07:39:31.790534 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:39:31 crc kubenswrapper[5058]: E1014 07:39:31.791580 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:39:44 crc kubenswrapper[5058]: I1014 07:39:44.790654 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:39:44 crc kubenswrapper[5058]: E1014 07:39:44.792078 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:39:55 crc kubenswrapper[5058]: I1014 07:39:55.790198 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:39:55 crc kubenswrapper[5058]: E1014 07:39:55.790812 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:40:10 crc kubenswrapper[5058]: I1014 07:40:10.790612 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:40:10 crc kubenswrapper[5058]: E1014 07:40:10.791508 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:40:23 crc kubenswrapper[5058]: I1014 07:40:23.790516 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:40:23 crc kubenswrapper[5058]: E1014 07:40:23.791969 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:40:37 crc kubenswrapper[5058]: I1014 07:40:37.790073 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:40:37 crc kubenswrapper[5058]: E1014 07:40:37.791039 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:40:49 crc kubenswrapper[5058]: I1014 07:40:49.790448 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:40:49 crc kubenswrapper[5058]: E1014 07:40:49.791554 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:41:02 crc kubenswrapper[5058]: I1014 07:41:02.795435 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:41:02 crc kubenswrapper[5058]: E1014 07:41:02.796294 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:41:13 crc kubenswrapper[5058]: I1014 07:41:13.790427 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:41:13 crc kubenswrapper[5058]: E1014 07:41:13.791422 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:41:28 crc kubenswrapper[5058]: I1014 07:41:28.790510 5058 scope.go:117] "RemoveContainer" 
containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:41:28 crc kubenswrapper[5058]: E1014 07:41:28.791204 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:41:40 crc kubenswrapper[5058]: I1014 07:41:40.790192 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:41:41 crc kubenswrapper[5058]: I1014 07:41:41.538014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451"} Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.613083 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614020 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614039 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614054 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614063 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614076 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="extract-content" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614083 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="extract-content" Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614097 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="extract-utilities" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614105 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="extract-utilities" Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614125 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="extract-content" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614132 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="extract-content" Oct 14 07:42:40 crc kubenswrapper[5058]: E1014 07:42:40.614147 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="extract-utilities" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614154 5058 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="extract-utilities" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.614311 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1b4d67-6d7d-4656-80cd-ac114b08ffbe" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.616974 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61390a3-fd9c-453c-856b-ea04de789715" containerName="registry-server" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.619004 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.619713 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.695076 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.695156 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.695190 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmll\" (UniqueName: \"kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.797481 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.797543 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmll\" (UniqueName: \"kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.797598 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.798207 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities\") pod 
\"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.798380 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.817253 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmll\" (UniqueName: \"kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll\") pod \"certified-operators-vdmrb\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:40 crc kubenswrapper[5058]: I1014 07:42:40.941891 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:41 crc kubenswrapper[5058]: I1014 07:42:41.234046 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:42 crc kubenswrapper[5058]: I1014 07:42:42.103448 5058 generic.go:334] "Generic (PLEG): container finished" podID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerID="8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91" exitCode=0 Oct 14 07:42:42 crc kubenswrapper[5058]: I1014 07:42:42.103528 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerDied","Data":"8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91"} Oct 14 07:42:42 crc kubenswrapper[5058]: I1014 07:42:42.103879 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerStarted","Data":"609b858fb5e36ea56a2dd101cae485480ef2a83609660a2cb6d8f96af594b616"} Oct 14 07:42:42 crc kubenswrapper[5058]: I1014 07:42:42.106322 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 07:42:44 crc kubenswrapper[5058]: I1014 07:42:44.122236 5058 generic.go:334] "Generic (PLEG): container finished" podID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerID="f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829" exitCode=0 Oct 14 07:42:44 crc kubenswrapper[5058]: I1014 07:42:44.122333 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerDied","Data":"f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829"} Oct 14 07:42:45 crc kubenswrapper[5058]: I1014 07:42:45.130542 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerStarted","Data":"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d"} Oct 14 07:42:45 crc kubenswrapper[5058]: I1014 07:42:45.152280 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vdmrb" podStartSLOduration=2.620070525 podStartE2EDuration="5.152261931s" podCreationTimestamp="2025-10-14 07:42:40 
+0000 UTC" firstStartedPulling="2025-10-14 07:42:42.106114004 +0000 UTC m=+3310.017197810" lastFinishedPulling="2025-10-14 07:42:44.6383054 +0000 UTC m=+3312.549389216" observedRunningTime="2025-10-14 07:42:45.146414036 +0000 UTC m=+3313.057497842" watchObservedRunningTime="2025-10-14 07:42:45.152261931 +0000 UTC m=+3313.063345737" Oct 14 07:42:50 crc kubenswrapper[5058]: I1014 07:42:50.943099 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:50 crc kubenswrapper[5058]: I1014 07:42:50.943981 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:51 crc kubenswrapper[5058]: I1014 07:42:51.019421 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:51 crc kubenswrapper[5058]: I1014 07:42:51.260451 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:51 crc kubenswrapper[5058]: I1014 07:42:51.299776 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.228013 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vdmrb" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="registry-server" containerID="cri-o://43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d" gracePeriod=2 Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.708385 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.788499 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmll\" (UniqueName: \"kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll\") pod \"bba8f469-d80a-41f5-9e13-cbae37546dc5\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.789038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities\") pod \"bba8f469-d80a-41f5-9e13-cbae37546dc5\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.789781 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities" (OuterVolumeSpecName: "utilities") pod "bba8f469-d80a-41f5-9e13-cbae37546dc5" (UID: "bba8f469-d80a-41f5-9e13-cbae37546dc5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.789868 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content\") pod \"bba8f469-d80a-41f5-9e13-cbae37546dc5\" (UID: \"bba8f469-d80a-41f5-9e13-cbae37546dc5\") " Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.791400 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.795465 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll" (OuterVolumeSpecName: "kube-api-access-zcmll") pod "bba8f469-d80a-41f5-9e13-cbae37546dc5" (UID: "bba8f469-d80a-41f5-9e13-cbae37546dc5"). InnerVolumeSpecName "kube-api-access-zcmll". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:42:53 crc kubenswrapper[5058]: I1014 07:42:53.892640 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmll\" (UniqueName: \"kubernetes.io/projected/bba8f469-d80a-41f5-9e13-cbae37546dc5-kube-api-access-zcmll\") on node \"crc\" DevicePath \"\"" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.192695 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bba8f469-d80a-41f5-9e13-cbae37546dc5" (UID: "bba8f469-d80a-41f5-9e13-cbae37546dc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.196723 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba8f469-d80a-41f5-9e13-cbae37546dc5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.238744 5058 generic.go:334] "Generic (PLEG): container finished" podID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerID="43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d" exitCode=0 Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.238838 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerDied","Data":"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d"} Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.238877 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vdmrb" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.238939 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vdmrb" event={"ID":"bba8f469-d80a-41f5-9e13-cbae37546dc5","Type":"ContainerDied","Data":"609b858fb5e36ea56a2dd101cae485480ef2a83609660a2cb6d8f96af594b616"} Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.238991 5058 scope.go:117] "RemoveContainer" containerID="43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.261834 5058 scope.go:117] "RemoveContainer" containerID="f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.288668 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.297088 5058 scope.go:117] "RemoveContainer" containerID="8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.300514 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vdmrb"] Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.326633 5058 scope.go:117] "RemoveContainer" containerID="43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d" Oct 14 07:42:54 crc kubenswrapper[5058]: E1014 07:42:54.327063 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d\": container with ID starting with 43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d not found: ID does not exist" containerID="43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.327101 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d"} err="failed to get container status \"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d\": rpc error: code = NotFound desc = could not find container \"43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d\": container with ID starting with 43fa7bc34a01276c82de158dd64e69cc77bceaaf38146e94991d30315865176d not found: ID does not exist" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.327125 5058 scope.go:117] "RemoveContainer" containerID="f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829" Oct 14 07:42:54 crc kubenswrapper[5058]: E1014 07:42:54.327623 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829\": container with ID starting with f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829 not found: ID does not exist" containerID="f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.327665 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829"} err="failed to get container status \"f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829\": rpc error: code = NotFound desc = could not find 
container \"f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829\": container with ID starting with f7233789ba54070298da4dd061a880351590866461663170ec913750ba30b829 not found: ID does not exist" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.327695 5058 scope.go:117] "RemoveContainer" containerID="8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91" Oct 14 07:42:54 crc kubenswrapper[5058]: E1014 07:42:54.328047 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91\": container with ID starting with 8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91 not found: ID does not exist" containerID="8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.328083 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91"} err="failed to get container status \"8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91\": rpc error: code = NotFound desc = could not find container \"8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91\": container with ID starting with 8cefd48bb7a96361bd9e24dc8d1ab8d2a20b63e095acffb0c8bb8576c57b7c91 not found: ID does not exist" Oct 14 07:42:54 crc kubenswrapper[5058]: I1014 07:42:54.804226 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" path="/var/lib/kubelet/pods/bba8f469-d80a-41f5-9e13-cbae37546dc5/volumes" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.438509 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:00 crc kubenswrapper[5058]: E1014 07:43:00.442972 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="extract-utilities" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.443169 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="extract-utilities" Oct 14 07:43:00 crc kubenswrapper[5058]: E1014 07:43:00.443317 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="registry-server" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.443429 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="registry-server" Oct 14 07:43:00 crc kubenswrapper[5058]: E1014 07:43:00.443629 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="extract-content" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.443776 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="extract-content" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.444344 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba8f469-d80a-41f5-9e13-cbae37546dc5" containerName="registry-server" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.446910 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.448067 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.601534 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.602013 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.602310 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7jl\" (UniqueName: \"kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.703535 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.703599 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.703654 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7jl\" (UniqueName: \"kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.704142 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.704254 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.732268 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7w7jl\" (UniqueName: \"kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl\") pod \"redhat-operators-22xl6\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:00 crc kubenswrapper[5058]: I1014 07:43:00.837172 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:01 crc kubenswrapper[5058]: I1014 07:43:01.302296 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:01 crc kubenswrapper[5058]: W1014 07:43:01.309376 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f96ee50_ea25_483f_b449_d3811c8ed1fa.slice/crio-1ef429a15d32e8e6b7d8b818d9a9318d971af4bf04c3cd124427c874086f0e54 WatchSource:0}: Error finding container 1ef429a15d32e8e6b7d8b818d9a9318d971af4bf04c3cd124427c874086f0e54: Status 404 returned error can't find the container with id 1ef429a15d32e8e6b7d8b818d9a9318d971af4bf04c3cd124427c874086f0e54 Oct 14 07:43:02 crc kubenswrapper[5058]: I1014 07:43:02.306550 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerID="2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7" exitCode=0 Oct 14 07:43:02 crc kubenswrapper[5058]: I1014 07:43:02.306604 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerDied","Data":"2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7"} Oct 14 07:43:02 crc kubenswrapper[5058]: I1014 07:43:02.306935 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerStarted","Data":"1ef429a15d32e8e6b7d8b818d9a9318d971af4bf04c3cd124427c874086f0e54"} Oct 14 07:43:03 crc kubenswrapper[5058]: I1014 07:43:03.316034 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerStarted","Data":"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802"} Oct 14 07:43:04 crc kubenswrapper[5058]: I1014 07:43:04.328466 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerID="389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802" exitCode=0 Oct 14 07:43:04 crc kubenswrapper[5058]: I1014 07:43:04.328546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerDied","Data":"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802"} Oct 14 07:43:05 crc kubenswrapper[5058]: I1014 07:43:05.345307 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerStarted","Data":"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93"} Oct 14 07:43:05 crc kubenswrapper[5058]: I1014 07:43:05.366870 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-22xl6" podStartSLOduration=2.815911576 podStartE2EDuration="5.366841069s" 
podCreationTimestamp="2025-10-14 07:43:00 +0000 UTC" firstStartedPulling="2025-10-14 07:43:02.308843339 +0000 UTC m=+3330.219927195" lastFinishedPulling="2025-10-14 07:43:04.859772842 +0000 UTC m=+3332.770856688" observedRunningTime="2025-10-14 07:43:05.361650183 +0000 UTC m=+3333.272733989" watchObservedRunningTime="2025-10-14 07:43:05.366841069 +0000 UTC m=+3333.277924905" Oct 14 07:43:10 crc kubenswrapper[5058]: I1014 07:43:10.837766 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:10 crc kubenswrapper[5058]: I1014 07:43:10.838689 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:10 crc kubenswrapper[5058]: I1014 07:43:10.915221 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:11 crc kubenswrapper[5058]: I1014 07:43:11.439358 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:11 crc kubenswrapper[5058]: I1014 07:43:11.808646 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.408146 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-22xl6" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="registry-server" containerID="cri-o://ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93" gracePeriod=2 Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.791929 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.992675 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w7jl\" (UniqueName: \"kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl\") pod \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.992766 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content\") pod \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.992810 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities\") pod \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\" (UID: \"4f96ee50-ea25-483f-b449-d3811c8ed1fa\") " Oct 14 07:43:13 crc kubenswrapper[5058]: I1014 07:43:13.994141 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities" (OuterVolumeSpecName: "utilities") pod "4f96ee50-ea25-483f-b449-d3811c8ed1fa" (UID: "4f96ee50-ea25-483f-b449-d3811c8ed1fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.006754 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl" (OuterVolumeSpecName: "kube-api-access-7w7jl") pod "4f96ee50-ea25-483f-b449-d3811c8ed1fa" (UID: "4f96ee50-ea25-483f-b449-d3811c8ed1fa"). InnerVolumeSpecName "kube-api-access-7w7jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.094366 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w7jl\" (UniqueName: \"kubernetes.io/projected/4f96ee50-ea25-483f-b449-d3811c8ed1fa-kube-api-access-7w7jl\") on node \"crc\" DevicePath \"\"" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.094413 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.419405 5058 generic.go:334] "Generic (PLEG): container finished" podID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerID="ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93" exitCode=0 Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.419513 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22xl6" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.419471 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerDied","Data":"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93"} Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.419697 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22xl6" event={"ID":"4f96ee50-ea25-483f-b449-d3811c8ed1fa","Type":"ContainerDied","Data":"1ef429a15d32e8e6b7d8b818d9a9318d971af4bf04c3cd124427c874086f0e54"} Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.419728 5058 scope.go:117] "RemoveContainer" containerID="ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.461736 5058 scope.go:117] "RemoveContainer" containerID="389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.488839 5058 scope.go:117] "RemoveContainer" containerID="2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.520138 5058 scope.go:117] "RemoveContainer" containerID="ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93" Oct 14 07:43:14 crc kubenswrapper[5058]: E1014 07:43:14.520785 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93\": container with ID starting with ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93 not found: ID does not exist" containerID="ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.520970 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93"} err="failed to get container status \"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93\": rpc error: code = NotFound desc = could not find container \"ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93\": container with ID starting with ec43afeea6232d8111df0d897e7a4f98401335e91859ed3991246d52cb808f93 not found: ID does not exist" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.521010 5058 scope.go:117] "RemoveContainer" containerID="389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802" Oct 14 07:43:14 crc kubenswrapper[5058]: E1014 07:43:14.521367 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802\": container with ID starting with 389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802 not found: ID does not exist" containerID="389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.521431 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802"} err="failed to get container status \"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802\": rpc error: code = NotFound desc = could not find container \"389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802\": container with ID starting with 389981f029f3d88985ec0b2ada055beb6d42658735e5f63594d30e4211577802 not found: ID does not exist" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.521464 5058 scope.go:117] "RemoveContainer" containerID="2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7" Oct 14 07:43:14 crc kubenswrapper[5058]: E1014 07:43:14.521988 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7\": container with ID starting with 2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7 not found: ID does not exist" containerID="2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7" Oct 14 07:43:14 crc kubenswrapper[5058]: I1014 07:43:14.522033 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7"} err="failed to get container status \"2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7\": rpc error: code = NotFound desc = could not find container \"2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7\": container with ID starting with 2f78b40c404804a8970d35f6eaa5d44d5871ffdc2881c63cf9ebaa2ad9dbf8f7 not found: ID does not exist" Oct 14 07:43:15 crc kubenswrapper[5058]: I1014 07:43:15.260286 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f96ee50-ea25-483f-b449-d3811c8ed1fa" (UID: "4f96ee50-ea25-483f-b449-d3811c8ed1fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:43:15 crc kubenswrapper[5058]: I1014 07:43:15.312711 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f96ee50-ea25-483f-b449-d3811c8ed1fa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:43:15 crc kubenswrapper[5058]: I1014 07:43:15.384970 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:15 crc kubenswrapper[5058]: I1014 07:43:15.395155 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-22xl6"] Oct 14 07:43:16 crc kubenswrapper[5058]: I1014 07:43:16.802985 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" path="/var/lib/kubelet/pods/4f96ee50-ea25-483f-b449-d3811c8ed1fa/volumes" Oct 14 07:44:03 crc kubenswrapper[5058]: I1014 07:44:03.656543 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:44:03 crc kubenswrapper[5058]: I1014 07:44:03.657168 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:44:33 crc kubenswrapper[5058]: I1014 07:44:33.655773 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:44:33 crc kubenswrapper[5058]: I1014 07:44:33.656359 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.192075 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw"] Oct 14 07:45:00 crc kubenswrapper[5058]: E1014 07:45:00.193473 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="registry-server" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.193491 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="registry-server" Oct 14 07:45:00 crc kubenswrapper[5058]: E1014 07:45:00.193536 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="extract-content" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.193542 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="extract-content" Oct 14 07:45:00 crc kubenswrapper[5058]: E1014 07:45:00.193555 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="extract-utilities" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.193563 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="extract-utilities" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.193714 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f96ee50-ea25-483f-b449-d3811c8ed1fa" containerName="registry-server" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.194607 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.197483 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.197534 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.204509 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw"] Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.238849 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.238933 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9gh\" (UniqueName: \"kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.239030 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.340819 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.340941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.340992 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mq9gh\" (UniqueName: \"kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.341911 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.346905 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.356184 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9gh\" (UniqueName: \"kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh\") pod \"collect-profiles-29340465-wwkdw\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.516123 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:00 crc kubenswrapper[5058]: I1014 07:45:00.988910 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw"] Oct 14 07:45:01 crc kubenswrapper[5058]: I1014 07:45:01.304843 5058 generic.go:334] "Generic (PLEG): container finished" podID="6ae82042-84ed-4477-a677-ea1cba5b70a9" containerID="1df896b0f0917ba7d4d17e56d68f2a463649c05513e56a936a1e1e0ffcb83fea" exitCode=0 Oct 14 07:45:01 crc kubenswrapper[5058]: I1014 07:45:01.304922 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" event={"ID":"6ae82042-84ed-4477-a677-ea1cba5b70a9","Type":"ContainerDied","Data":"1df896b0f0917ba7d4d17e56d68f2a463649c05513e56a936a1e1e0ffcb83fea"} Oct 14 07:45:01 crc kubenswrapper[5058]: I1014 07:45:01.305224 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" event={"ID":"6ae82042-84ed-4477-a677-ea1cba5b70a9","Type":"ContainerStarted","Data":"0a5e7c1acfb408f65292a2a486c57e54e8f327da8cea42607b29fccd58402eb0"} Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.607093 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.674937 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq9gh\" (UniqueName: \"kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh\") pod \"6ae82042-84ed-4477-a677-ea1cba5b70a9\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.675145 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume\") pod \"6ae82042-84ed-4477-a677-ea1cba5b70a9\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.675218 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume\") pod \"6ae82042-84ed-4477-a677-ea1cba5b70a9\" (UID: \"6ae82042-84ed-4477-a677-ea1cba5b70a9\") " Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.675727 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ae82042-84ed-4477-a677-ea1cba5b70a9" (UID: "6ae82042-84ed-4477-a677-ea1cba5b70a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.681331 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh" (OuterVolumeSpecName: "kube-api-access-mq9gh") pod "6ae82042-84ed-4477-a677-ea1cba5b70a9" (UID: "6ae82042-84ed-4477-a677-ea1cba5b70a9"). InnerVolumeSpecName "kube-api-access-mq9gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.681867 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ae82042-84ed-4477-a677-ea1cba5b70a9" (UID: "6ae82042-84ed-4477-a677-ea1cba5b70a9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.777431 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae82042-84ed-4477-a677-ea1cba5b70a9-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.777468 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae82042-84ed-4477-a677-ea1cba5b70a9-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 07:45:02 crc kubenswrapper[5058]: I1014 07:45:02.777481 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq9gh\" (UniqueName: \"kubernetes.io/projected/6ae82042-84ed-4477-a677-ea1cba5b70a9-kube-api-access-mq9gh\") on node \"crc\" DevicePath \"\"" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.324172 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" event={"ID":"6ae82042-84ed-4477-a677-ea1cba5b70a9","Type":"ContainerDied","Data":"0a5e7c1acfb408f65292a2a486c57e54e8f327da8cea42607b29fccd58402eb0"} Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.324270 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a5e7c1acfb408f65292a2a486c57e54e8f327da8cea42607b29fccd58402eb0" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.324270 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.656001 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.656053 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.656114 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.656731 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.656810 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451" gracePeriod=600 Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.680709 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g"] Oct 14 07:45:03 crc kubenswrapper[5058]: I1014 07:45:03.685665 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340420-r8j8g"] Oct 14 07:45:04 crc kubenswrapper[5058]: I1014 07:45:04.335785 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451" exitCode=0 Oct 14 07:45:04 crc kubenswrapper[5058]: I1014 07:45:04.335851 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451"} Oct 14 07:45:04 crc kubenswrapper[5058]: I1014 07:45:04.336129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc"} Oct 14 07:45:04 crc kubenswrapper[5058]: I1014 07:45:04.336155 5058 scope.go:117] "RemoveContainer" containerID="df47e5119b91d85a911699b611e73616be4dc2e55140cbc350cdbc3322eae8a9" Oct 14 07:45:04 crc kubenswrapper[5058]: I1014 07:45:04.799955 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcfdfb7-2c52-4e13-919c-d63a7ef2f111" path="/var/lib/kubelet/pods/cdcfdfb7-2c52-4e13-919c-d63a7ef2f111/volumes" Oct 14 07:45:38 crc kubenswrapper[5058]: I1014 07:45:38.636930 5058 scope.go:117] "RemoveContainer" containerID="dcd81169984a0bfe708691a602bcb12d2d2d7d6638201820ee6b92411c854a7a" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.283044 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:15 crc kubenswrapper[5058]: E1014 07:46:15.284000 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae82042-84ed-4477-a677-ea1cba5b70a9" containerName="collect-profiles" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.284017 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae82042-84ed-4477-a677-ea1cba5b70a9" containerName="collect-profiles" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.284177 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae82042-84ed-4477-a677-ea1cba5b70a9" containerName="collect-profiles" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.285117 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.294310 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.415781 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.415910 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.415966 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9r6\" (UniqueName: \"kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.517028 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.517103 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9r6\" (UniqueName: \"kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.517140 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.517547 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.517648 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.538266 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-px9r6\" (UniqueName: \"kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6\") pod \"community-operators-89j92\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:15 crc kubenswrapper[5058]: I1014 07:46:15.608441 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:16 crc kubenswrapper[5058]: I1014 07:46:16.151618 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:16 crc kubenswrapper[5058]: I1014 07:46:16.937650 5058 generic.go:334] "Generic (PLEG): container finished" podID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerID="7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2" exitCode=0 Oct 14 07:46:16 crc kubenswrapper[5058]: I1014 07:46:16.937743 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerDied","Data":"7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2"} Oct 14 07:46:16 crc kubenswrapper[5058]: I1014 07:46:16.937999 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerStarted","Data":"3b79d8f4fcb0301bf182ad70f72d170516be41264dfa4a402549e27de656a566"} Oct 14 07:46:18 crc kubenswrapper[5058]: I1014 07:46:18.955672 5058 generic.go:334] "Generic (PLEG): container finished" podID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerID="05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c" exitCode=0 Oct 14 07:46:18 crc kubenswrapper[5058]: I1014 07:46:18.955764 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerDied","Data":"05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c"} Oct 14 07:46:19 crc kubenswrapper[5058]: I1014 07:46:19.965888 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerStarted","Data":"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9"} Oct 14 07:46:19 crc kubenswrapper[5058]: I1014 07:46:19.993580 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-89j92" podStartSLOduration=2.469341248 podStartE2EDuration="4.993543396s" podCreationTimestamp="2025-10-14 07:46:15 +0000 UTC" firstStartedPulling="2025-10-14 07:46:16.939416989 +0000 UTC m=+3524.850500805" lastFinishedPulling="2025-10-14 07:46:19.463619147 +0000 UTC m=+3527.374702953" observedRunningTime="2025-10-14 07:46:19.983294217 +0000 UTC m=+3527.894378083" watchObservedRunningTime="2025-10-14 07:46:19.993543396 +0000 UTC m=+3527.904627242" Oct 14 07:46:25 crc kubenswrapper[5058]: I1014 07:46:25.609143 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:25 crc kubenswrapper[5058]: I1014 07:46:25.609931 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:25 crc kubenswrapper[5058]: I1014 07:46:25.654227 5058 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:26 crc kubenswrapper[5058]: I1014 07:46:26.067099 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:26 crc kubenswrapper[5058]: I1014 07:46:26.128726 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.022016 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-89j92" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="registry-server" containerID="cri-o://5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9" gracePeriod=2 Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.396422 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.527237 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px9r6\" (UniqueName: \"kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6\") pod \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.527290 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities\") pod \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.527462 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content\") pod \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\" (UID: \"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659\") " Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.529052 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities" (OuterVolumeSpecName: "utilities") pod "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" (UID: "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.536574 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6" (OuterVolumeSpecName: "kube-api-access-px9r6") pod "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" (UID: "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659"). InnerVolumeSpecName "kube-api-access-px9r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.581513 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" (UID: "3ffd92b6-1ec7-4591-bee3-6f6aa1f63659"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.629270 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.629319 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:28 crc kubenswrapper[5058]: I1014 07:46:28.629340 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px9r6\" (UniqueName: \"kubernetes.io/projected/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659-kube-api-access-px9r6\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.035517 5058 generic.go:334] "Generic (PLEG): container finished" podID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerID="5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9" exitCode=0 Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.035575 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89j92" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.035607 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerDied","Data":"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9"} Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.037143 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89j92" event={"ID":"3ffd92b6-1ec7-4591-bee3-6f6aa1f63659","Type":"ContainerDied","Data":"3b79d8f4fcb0301bf182ad70f72d170516be41264dfa4a402549e27de656a566"} Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.037169 5058 scope.go:117] "RemoveContainer" containerID="5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.066490 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.071817 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-89j92"] Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.075011 5058 scope.go:117] "RemoveContainer" containerID="05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.092762 5058 scope.go:117] "RemoveContainer" containerID="7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.113599 5058 scope.go:117] "RemoveContainer" containerID="5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9" Oct 14 07:46:29 crc kubenswrapper[5058]: E1014 07:46:29.114090 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9\": container with ID starting with 5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9 not found: ID does not exist" containerID="5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.114147 
5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9"} err="failed to get container status \"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9\": rpc error: code = NotFound desc = could not find container \"5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9\": container with ID starting with 5ab08215fe197c0109d062825fb2fbb743504374b16d7943c15a21107a08c5f9 not found: ID does not exist" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.114174 5058 scope.go:117] "RemoveContainer" containerID="05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c" Oct 14 07:46:29 crc kubenswrapper[5058]: E1014 07:46:29.114514 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c\": container with ID starting with 05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c not found: ID does not exist" containerID="05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.114543 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c"} err="failed to get container status \"05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c\": rpc error: code = NotFound desc = could not find container \"05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c\": container with ID starting with 05053866e6c1ea8ee326d874d748fc2b60d875e7ddf07cb4b923e976c177580c not found: ID does not exist" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.114561 5058 scope.go:117] "RemoveContainer" containerID="7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2" Oct 14 07:46:29 crc kubenswrapper[5058]: E1014 07:46:29.115019 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2\": container with ID starting with 7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2 not found: ID does not exist" containerID="7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2" Oct 14 07:46:29 crc kubenswrapper[5058]: I1014 07:46:29.115051 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2"} err="failed to get container status \"7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2\": rpc error: code = NotFound desc = could not find container \"7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2\": container with ID starting with 7c04d31f6c8dedb0c19725510b4cdc56fd0f89b2de7d33bb946744b78c9781b2 not found: ID does not exist" Oct 14 07:46:30 crc kubenswrapper[5058]: I1014 07:46:30.802330 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" path="/var/lib/kubelet/pods/3ffd92b6-1ec7-4591-bee3-6f6aa1f63659/volumes" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.499631 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:38 crc kubenswrapper[5058]: E1014 07:46:38.500418 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="extract-utilities" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.500431 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="extract-utilities" Oct 14 07:46:38 crc kubenswrapper[5058]: E1014 07:46:38.500467 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="registry-server" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.500477 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="registry-server" Oct 14 07:46:38 crc kubenswrapper[5058]: E1014 07:46:38.500489 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="extract-content" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.500498 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="extract-content" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.500669 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffd92b6-1ec7-4591-bee3-6f6aa1f63659" containerName="registry-server" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.501855 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.526854 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.671098 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.671144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmqc\" (UniqueName: \"kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.671245 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.772289 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.772393 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmqc\" (UniqueName: \"kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc\") pod 
\"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.772449 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.773001 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.773011 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.793450 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swmqc\" (UniqueName: \"kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc\") pod \"redhat-marketplace-dd2td\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:38 crc kubenswrapper[5058]: I1014 07:46:38.833641 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:39 crc kubenswrapper[5058]: I1014 07:46:39.313276 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:40 crc kubenswrapper[5058]: I1014 07:46:40.129588 5058 generic.go:334] "Generic (PLEG): container finished" podID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerID="ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6" exitCode=0 Oct 14 07:46:40 crc kubenswrapper[5058]: I1014 07:46:40.129634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerDied","Data":"ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6"} Oct 14 07:46:40 crc kubenswrapper[5058]: I1014 07:46:40.129959 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerStarted","Data":"5389ebf34b1907800fa87d8c8724d6d2fce23d9333b47baa1e554e8bf443b5d9"} Oct 14 07:46:41 crc kubenswrapper[5058]: I1014 07:46:41.137784 5058 generic.go:334] "Generic (PLEG): container finished" podID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerID="d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f" exitCode=0 Oct 14 07:46:41 crc kubenswrapper[5058]: I1014 07:46:41.137874 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerDied","Data":"d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f"} Oct 14 07:46:43 crc kubenswrapper[5058]: I1014 07:46:43.156631 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerStarted","Data":"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487"} Oct 14 07:46:43 crc kubenswrapper[5058]: I1014 07:46:43.183734 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dd2td" podStartSLOduration=3.19012908 podStartE2EDuration="5.18370602s" podCreationTimestamp="2025-10-14 07:46:38 +0000 UTC" firstStartedPulling="2025-10-14 07:46:40.131515248 +0000 UTC m=+3548.042599054" lastFinishedPulling="2025-10-14 07:46:42.125092158 +0000 UTC m=+3550.036175994" observedRunningTime="2025-10-14 07:46:43.175127078 +0000 UTC m=+3551.086210904" watchObservedRunningTime="2025-10-14 07:46:43.18370602 +0000 UTC m=+3551.094789866" Oct 14 07:46:48 crc kubenswrapper[5058]: I1014 07:46:48.834198 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:48 crc kubenswrapper[5058]: I1014 07:46:48.834771 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:48 crc kubenswrapper[5058]: I1014 07:46:48.883162 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:49 crc kubenswrapper[5058]: I1014 07:46:49.261782 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:49 crc kubenswrapper[5058]: I1014 07:46:49.311960 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.225378 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dd2td" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="registry-server" containerID="cri-o://9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487" gracePeriod=2 Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.600871 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.654993 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities\") pod \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.655091 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swmqc\" (UniqueName: \"kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc\") pod \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.655186 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content\") pod \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\" (UID: \"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b\") " Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.655897 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities" (OuterVolumeSpecName: "utilities") pod "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" (UID: "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.662695 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc" (OuterVolumeSpecName: "kube-api-access-swmqc") pod "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" (UID: "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b"). InnerVolumeSpecName "kube-api-access-swmqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.670861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" (UID: "82c1ff5b-7b09-413c-9bce-8b1d87eaff3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.757111 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.757393 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swmqc\" (UniqueName: \"kubernetes.io/projected/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-kube-api-access-swmqc\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:51 crc kubenswrapper[5058]: I1014 07:46:51.757408 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.239745 5058 generic.go:334] "Generic (PLEG): container finished" podID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerID="9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487" exitCode=0 Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.239863 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerDied","Data":"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487"} Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.239901 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd2td" event={"ID":"82c1ff5b-7b09-413c-9bce-8b1d87eaff3b","Type":"ContainerDied","Data":"5389ebf34b1907800fa87d8c8724d6d2fce23d9333b47baa1e554e8bf443b5d9"} Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.239924 5058 scope.go:117] "RemoveContainer" containerID="9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.241431 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd2td" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.261513 5058 scope.go:117] "RemoveContainer" containerID="d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.284663 5058 scope.go:117] "RemoveContainer" containerID="ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.285698 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.299007 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd2td"] Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.327926 5058 scope.go:117] "RemoveContainer" containerID="9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487" Oct 14 07:46:52 crc kubenswrapper[5058]: E1014 07:46:52.328322 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487\": container with ID starting with 9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487 not found: ID does not exist" containerID="9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.328608 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487"} err="failed to get container status \"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487\": rpc error: code = NotFound desc = could not find container \"9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487\": container with ID starting with 9a576c18dc191adad6df2038cdb20c96d90f723e69dc641cf11547975e247487 not found: ID does not exist" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.328640 5058 scope.go:117] "RemoveContainer" containerID="d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f" Oct 14 07:46:52 crc kubenswrapper[5058]: E1014 07:46:52.328961 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f\": container with ID starting with d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f not found: ID does not exist" containerID="d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.328988 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f"} err="failed to get container status \"d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f\": rpc error: code = NotFound desc = could not find container \"d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f\": container with ID starting with d1d8bf51330e4a871658cef0aed001d1ca31ef626de897861d00e812242e879f not found: ID does not exist" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.329002 5058 scope.go:117] "RemoveContainer" containerID="ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6" Oct 14 07:46:52 crc kubenswrapper[5058]: E1014 07:46:52.329383 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6\": container with ID starting with ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6 not found: ID does not exist" containerID="ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.329410 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6"} err="failed to get container status \"ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6\": rpc error: code = NotFound desc = could not find container \"ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6\": container with ID starting with ab09c1b02b79b744f60b29bff71ebadb408053c0b49b830bebe1ccba63fec5e6 not found: ID does not exist" Oct 14 07:46:52 crc kubenswrapper[5058]: I1014 07:46:52.803781 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" path="/var/lib/kubelet/pods/82c1ff5b-7b09-413c-9bce-8b1d87eaff3b/volumes" Oct 14 07:47:03 crc kubenswrapper[5058]: I1014 07:47:03.656171 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:47:03 crc kubenswrapper[5058]: I1014 07:47:03.657036 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:47:33 crc kubenswrapper[5058]: I1014 07:47:33.656130 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:47:33 crc kubenswrapper[5058]: I1014 07:47:33.656786 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.656934 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.657576 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.657634 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.658328 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.658393 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" gracePeriod=600 Oct 14 07:48:03 crc kubenswrapper[5058]: E1014 07:48:03.781312 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.851716 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" exitCode=0 Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.851763 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc"} Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.851817 5058 scope.go:117] "RemoveContainer" containerID="37265925e6cf57fe185a2b5cb88e1b9004ce10b93db3a44bcabb685050412451" Oct 14 07:48:03 crc kubenswrapper[5058]: I1014 07:48:03.852952 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:48:03 crc kubenswrapper[5058]: E1014 07:48:03.853471 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:48:17 crc kubenswrapper[5058]: I1014 07:48:17.790869 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:48:17 crc kubenswrapper[5058]: E1014 07:48:17.791855 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:48:32 crc 
kubenswrapper[5058]: I1014 07:48:32.798207 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:48:32 crc kubenswrapper[5058]: E1014 07:48:32.799303 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:48:46 crc kubenswrapper[5058]: I1014 07:48:46.790411 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:48:46 crc kubenswrapper[5058]: E1014 07:48:46.791267 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:48:59 crc kubenswrapper[5058]: I1014 07:48:59.789651 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:48:59 crc kubenswrapper[5058]: E1014 07:48:59.790587 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:49:14 crc kubenswrapper[5058]: I1014 07:49:14.790883 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:49:14 crc kubenswrapper[5058]: E1014 07:49:14.791785 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:49:29 crc kubenswrapper[5058]: I1014 07:49:29.789726 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:49:29 crc kubenswrapper[5058]: E1014 07:49:29.790598 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:49:43 crc kubenswrapper[5058]: I1014 07:49:43.790314 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:49:43 crc 
kubenswrapper[5058]: E1014 07:49:43.791557 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:49:54 crc kubenswrapper[5058]: I1014 07:49:54.790284 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:49:54 crc kubenswrapper[5058]: E1014 07:49:54.791165 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:50:06 crc kubenswrapper[5058]: I1014 07:50:06.790491 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:50:06 crc kubenswrapper[5058]: E1014 07:50:06.793632 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:50:18 crc kubenswrapper[5058]: I1014 07:50:18.791364 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:50:18 crc kubenswrapper[5058]: E1014 07:50:18.792698 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:50:29 crc kubenswrapper[5058]: I1014 07:50:29.790393 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:50:29 crc kubenswrapper[5058]: E1014 07:50:29.791533 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:50:43 crc kubenswrapper[5058]: I1014 07:50:43.790376 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:50:43 crc kubenswrapper[5058]: E1014 07:50:43.791089 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:50:54 crc kubenswrapper[5058]: I1014 07:50:54.790372 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:50:54 crc kubenswrapper[5058]: E1014 07:50:54.790996 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:51:09 crc kubenswrapper[5058]: I1014 07:51:09.790012 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:51:09 crc kubenswrapper[5058]: E1014 07:51:09.791219 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:51:23 crc kubenswrapper[5058]: I1014 07:51:23.790256 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:51:23 crc kubenswrapper[5058]: E1014 07:51:23.791442 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:51:37 crc kubenswrapper[5058]: I1014 07:51:37.789554 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:51:37 crc kubenswrapper[5058]: E1014 07:51:37.790625 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:51:49 crc kubenswrapper[5058]: I1014 07:51:49.789697 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:51:49 crc kubenswrapper[5058]: E1014 07:51:49.790376 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:52:04 crc kubenswrapper[5058]: I1014 07:52:04.789392 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:52:04 crc kubenswrapper[5058]: E1014 07:52:04.790316 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:52:16 crc kubenswrapper[5058]: I1014 07:52:16.789679 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:52:16 crc kubenswrapper[5058]: E1014 07:52:16.790608 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:52:30 crc kubenswrapper[5058]: I1014 07:52:30.790393 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:52:30 crc kubenswrapper[5058]: E1014 07:52:30.791174 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:52:45 crc kubenswrapper[5058]: I1014 07:52:45.791220 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:52:45 crc kubenswrapper[5058]: E1014 07:52:45.792081 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:52:56 crc kubenswrapper[5058]: I1014 07:52:56.790174 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:52:56 crc kubenswrapper[5058]: E1014 07:52:56.790827 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:53:09 crc kubenswrapper[5058]: I1014 07:53:09.789747 5058 
scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:53:10 crc kubenswrapper[5058]: I1014 07:53:10.414200 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069"} Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.229228 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:14 crc kubenswrapper[5058]: E1014 07:53:14.230655 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="extract-utilities" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.230672 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="extract-utilities" Oct 14 07:53:14 crc kubenswrapper[5058]: E1014 07:53:14.230691 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="extract-content" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.230698 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="extract-content" Oct 14 07:53:14 crc kubenswrapper[5058]: E1014 07:53:14.230717 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="registry-server" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.230725 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="registry-server" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.230929 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c1ff5b-7b09-413c-9bce-8b1d87eaff3b" containerName="registry-server" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.232160 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.256578 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.329508 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.330204 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxsg\" (UniqueName: \"kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.330368 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.431233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.431302 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxsg\" (UniqueName: \"kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.431345 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.431857 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.432200 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.691829 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zhxsg\" (UniqueName: \"kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg\") pod \"certified-operators-hf9jd\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:14 crc kubenswrapper[5058]: I1014 07:53:14.862481 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:15 crc kubenswrapper[5058]: I1014 07:53:15.328581 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:15 crc kubenswrapper[5058]: W1014 07:53:15.330446 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91c94da0_1923_4bf5_a7a0_981316d712a1.slice/crio-bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672 WatchSource:0}: Error finding container bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672: Status 404 returned error can't find the container with id bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672 Oct 14 07:53:15 crc kubenswrapper[5058]: I1014 07:53:15.456852 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerStarted","Data":"bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672"} Oct 14 07:53:16 crc kubenswrapper[5058]: I1014 07:53:16.471247 5058 generic.go:334] "Generic (PLEG): container finished" podID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerID="f9a53e8b7fb5312b654ebe67cd1e68015ef7530d4361260cdbe06649075d903f" exitCode=0 Oct 14 07:53:16 crc kubenswrapper[5058]: I1014 07:53:16.471615 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerDied","Data":"f9a53e8b7fb5312b654ebe67cd1e68015ef7530d4361260cdbe06649075d903f"} Oct 14 07:53:16 crc kubenswrapper[5058]: I1014 07:53:16.480241 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 07:53:18 crc kubenswrapper[5058]: I1014 07:53:18.491684 5058 generic.go:334] "Generic (PLEG): container finished" podID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerID="9150e5db249f0e8cf4242775340bc80cf9e313eede022541038579d717882aae" exitCode=0 Oct 14 07:53:18 crc kubenswrapper[5058]: I1014 07:53:18.491741 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerDied","Data":"9150e5db249f0e8cf4242775340bc80cf9e313eede022541038579d717882aae"} Oct 14 07:53:19 crc kubenswrapper[5058]: I1014 07:53:19.504487 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerStarted","Data":"7f5a27c6462c388e230237795bd93e5303eabad44075c716ed590ddb16523f74"} Oct 14 07:53:19 crc kubenswrapper[5058]: I1014 07:53:19.531563 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hf9jd" podStartSLOduration=2.803672388 podStartE2EDuration="5.531540721s" podCreationTimestamp="2025-10-14 07:53:14 +0000 UTC" firstStartedPulling="2025-10-14 07:53:16.476776779 +0000 UTC 
m=+3944.387860635" lastFinishedPulling="2025-10-14 07:53:19.204645152 +0000 UTC m=+3947.115728968" observedRunningTime="2025-10-14 07:53:19.527704983 +0000 UTC m=+3947.438788819" watchObservedRunningTime="2025-10-14 07:53:19.531540721 +0000 UTC m=+3947.442624557" Oct 14 07:53:24 crc kubenswrapper[5058]: I1014 07:53:24.863384 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:24 crc kubenswrapper[5058]: I1014 07:53:24.864388 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:24 crc kubenswrapper[5058]: I1014 07:53:24.936844 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:25 crc kubenswrapper[5058]: I1014 07:53:25.645214 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:25 crc kubenswrapper[5058]: I1014 07:53:25.713722 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:27 crc kubenswrapper[5058]: I1014 07:53:27.605621 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hf9jd" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="registry-server" containerID="cri-o://7f5a27c6462c388e230237795bd93e5303eabad44075c716ed590ddb16523f74" gracePeriod=2 Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.614737 5058 generic.go:334] "Generic (PLEG): container finished" podID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerID="7f5a27c6462c388e230237795bd93e5303eabad44075c716ed590ddb16523f74" exitCode=0 Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.614829 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerDied","Data":"7f5a27c6462c388e230237795bd93e5303eabad44075c716ed590ddb16523f74"} Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.615120 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf9jd" event={"ID":"91c94da0-1923-4bf5-a7a0-981316d712a1","Type":"ContainerDied","Data":"bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672"} Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.615137 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdf8aec7624948090d908b73a7cae9c6c0d456ebfa24d237217d0d9104a1672" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.675360 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.771331 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities\") pod \"91c94da0-1923-4bf5-a7a0-981316d712a1\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.771371 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhxsg\" (UniqueName: \"kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg\") pod \"91c94da0-1923-4bf5-a7a0-981316d712a1\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.771472 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content\") pod \"91c94da0-1923-4bf5-a7a0-981316d712a1\" (UID: \"91c94da0-1923-4bf5-a7a0-981316d712a1\") " Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.772647 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities" (OuterVolumeSpecName: "utilities") pod "91c94da0-1923-4bf5-a7a0-981316d712a1" (UID: "91c94da0-1923-4bf5-a7a0-981316d712a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.778470 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg" (OuterVolumeSpecName: "kube-api-access-zhxsg") pod "91c94da0-1923-4bf5-a7a0-981316d712a1" (UID: "91c94da0-1923-4bf5-a7a0-981316d712a1"). InnerVolumeSpecName "kube-api-access-zhxsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.823646 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91c94da0-1923-4bf5-a7a0-981316d712a1" (UID: "91c94da0-1923-4bf5-a7a0-981316d712a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.873686 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.873732 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91c94da0-1923-4bf5-a7a0-981316d712a1-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:53:28 crc kubenswrapper[5058]: I1014 07:53:28.873753 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhxsg\" (UniqueName: \"kubernetes.io/projected/91c94da0-1923-4bf5-a7a0-981316d712a1-kube-api-access-zhxsg\") on node \"crc\" DevicePath \"\"" Oct 14 07:53:29 crc kubenswrapper[5058]: I1014 07:53:29.634815 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf9jd" Oct 14 07:53:29 crc kubenswrapper[5058]: I1014 07:53:29.682747 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:29 crc kubenswrapper[5058]: I1014 07:53:29.692277 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hf9jd"] Oct 14 07:53:30 crc kubenswrapper[5058]: I1014 07:53:30.802433 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" path="/var/lib/kubelet/pods/91c94da0-1923-4bf5-a7a0-981316d712a1/volumes" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.252177 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:05 crc kubenswrapper[5058]: E1014 07:54:05.252953 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="extract-content" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.252964 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="extract-content" Oct 14 07:54:05 crc kubenswrapper[5058]: E1014 07:54:05.252974 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="extract-utilities" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.252981 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="extract-utilities" Oct 14 07:54:05 crc kubenswrapper[5058]: E1014 07:54:05.252997 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="registry-server" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.253002 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="registry-server" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.253158 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c94da0-1923-4bf5-a7a0-981316d712a1" containerName="registry-server" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.254126 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.277427 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.337997 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56dnc\" (UniqueName: \"kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.338083 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.338119 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.439206 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56dnc\" (UniqueName: \"kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.439545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.439671 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.440084 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.440316 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.469114 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-56dnc\" (UniqueName: \"kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc\") pod \"redhat-operators-9c272\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:05 crc kubenswrapper[5058]: I1014 07:54:05.577386 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:06 crc kubenswrapper[5058]: I1014 07:54:06.035119 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:06 crc kubenswrapper[5058]: I1014 07:54:06.998971 5058 generic.go:334] "Generic (PLEG): container finished" podID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerID="b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997" exitCode=0 Oct 14 07:54:06 crc kubenswrapper[5058]: I1014 07:54:06.999034 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerDied","Data":"b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997"} Oct 14 07:54:06 crc kubenswrapper[5058]: I1014 07:54:06.999430 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerStarted","Data":"fa8d9cddc746a8eb9148887eef3b2fd7c722b0852efafe5c4f1303c82376aa3c"} Oct 14 07:54:08 crc kubenswrapper[5058]: I1014 07:54:08.012214 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerStarted","Data":"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a"} Oct 14 07:54:09 crc kubenswrapper[5058]: I1014 07:54:09.023238 5058 generic.go:334] "Generic (PLEG): container finished" podID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerID="567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a" exitCode=0 Oct 14 07:54:09 crc kubenswrapper[5058]: I1014 07:54:09.023291 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerDied","Data":"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a"} Oct 14 07:54:10 crc kubenswrapper[5058]: I1014 07:54:10.052789 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerStarted","Data":"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00"} Oct 14 07:54:10 crc kubenswrapper[5058]: I1014 07:54:10.078285 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9c272" podStartSLOduration=2.613905711 podStartE2EDuration="5.078265795s" podCreationTimestamp="2025-10-14 07:54:05 +0000 UTC" firstStartedPulling="2025-10-14 07:54:07.003961783 +0000 UTC m=+3994.915045589" lastFinishedPulling="2025-10-14 07:54:09.468321837 +0000 UTC m=+3997.379405673" observedRunningTime="2025-10-14 07:54:10.076026352 +0000 UTC m=+3997.987110188" watchObservedRunningTime="2025-10-14 07:54:10.078265795 +0000 UTC m=+3997.989349611" Oct 14 07:54:15 crc kubenswrapper[5058]: I1014 07:54:15.578119 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 
07:54:15 crc kubenswrapper[5058]: I1014 07:54:15.578688 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:15 crc kubenswrapper[5058]: I1014 07:54:15.653391 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:16 crc kubenswrapper[5058]: I1014 07:54:16.185576 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:16 crc kubenswrapper[5058]: I1014 07:54:16.245318 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.126582 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9c272" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="registry-server" containerID="cri-o://8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00" gracePeriod=2 Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.581210 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.773375 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities\") pod \"b15382ce-fb54-4540-80dc-59a9210b6db8\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.773424 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content\") pod \"b15382ce-fb54-4540-80dc-59a9210b6db8\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.773537 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56dnc\" (UniqueName: \"kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc\") pod \"b15382ce-fb54-4540-80dc-59a9210b6db8\" (UID: \"b15382ce-fb54-4540-80dc-59a9210b6db8\") " Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.777208 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities" (OuterVolumeSpecName: "utilities") pod "b15382ce-fb54-4540-80dc-59a9210b6db8" (UID: "b15382ce-fb54-4540-80dc-59a9210b6db8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.779723 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc" (OuterVolumeSpecName: "kube-api-access-56dnc") pod "b15382ce-fb54-4540-80dc-59a9210b6db8" (UID: "b15382ce-fb54-4540-80dc-59a9210b6db8"). InnerVolumeSpecName "kube-api-access-56dnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.875507 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:54:18 crc kubenswrapper[5058]: I1014 07:54:18.875559 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56dnc\" (UniqueName: \"kubernetes.io/projected/b15382ce-fb54-4540-80dc-59a9210b6db8-kube-api-access-56dnc\") on node \"crc\" DevicePath \"\"" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.141768 5058 generic.go:334] "Generic (PLEG): container finished" podID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerID="8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00" exitCode=0 Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.141869 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerDied","Data":"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00"} Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.141935 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c272" event={"ID":"b15382ce-fb54-4540-80dc-59a9210b6db8","Type":"ContainerDied","Data":"fa8d9cddc746a8eb9148887eef3b2fd7c722b0852efafe5c4f1303c82376aa3c"} Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.141966 5058 scope.go:117] "RemoveContainer" containerID="8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.141896 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9c272" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.173831 5058 scope.go:117] "RemoveContainer" containerID="567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.209061 5058 scope.go:117] "RemoveContainer" containerID="b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.258907 5058 scope.go:117] "RemoveContainer" containerID="8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00" Oct 14 07:54:19 crc kubenswrapper[5058]: E1014 07:54:19.259634 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00\": container with ID starting with 8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00 not found: ID does not exist" containerID="8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.259712 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00"} err="failed to get container status \"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00\": rpc error: code = NotFound desc = could not find container \"8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00\": container with ID starting with 8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00 not found: ID does not exist" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.259749 5058 scope.go:117] "RemoveContainer" containerID="567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a" Oct 14 07:54:19 crc kubenswrapper[5058]: E1014 07:54:19.260683 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a\": container with ID starting with 567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a not found: ID does not exist" containerID="567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.260835 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a"} err="failed to get container status \"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a\": rpc error: code = NotFound desc = could not find container \"567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a\": container with ID starting with 567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a not found: ID does not exist" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.260898 5058 scope.go:117] "RemoveContainer" containerID="b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997" Oct 14 07:54:19 crc kubenswrapper[5058]: E1014 07:54:19.261413 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997\": container with ID starting with b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997 not found: ID does not exist" containerID="b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997" 
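The RemoveContainer records above, each answered by an rpc NotFound and a "DeleteContainer returned error ... failed to get container status" entry (the first of which follows immediately below), show the cleanup path being idempotent: the pod sandbox fa8d9cddc746a8eb9148887eef3b2fd7c722b0852efafe5c4f1303c82376aa3c was already torn down, so every per-container status lookup misses, and the kubelet logs the miss and carries on rather than failing the sync. A minimal sketch of that treat-NotFound-as-already-deleted pattern, in plain Python with hypothetical names (FakeRuntime and NotFoundError stand in for CRI-O and its gRPC NotFound status; none of this is kubelet's actual code), using the container IDs from the records above:

class NotFoundError(Exception):
    """Stand-in for a gRPC NotFound status from the container runtime."""

class FakeRuntime:
    """Toy runtime: a dict of container IDs standing in for CRI-O."""
    def __init__(self, containers):
        self._containers = dict(containers)

    def container_status(self, cid):
        if cid not in self._containers:
            raise NotFoundError(f'could not find container "{cid}"')
        return self._containers[cid]

    def remove_container(self, cid):
        self._containers.pop(cid, None)  # removal itself is already idempotent

def remove_containers(runtime, cids):
    """Remove each container, treating NotFound as success (already gone)."""
    for cid in cids:
        try:
            # The "failed to get container status" text above suggests the
            # deletion path looks up status before removing.
            runtime.container_status(cid)
        except NotFoundError as err:
            # Mirrors "DeleteContainer returned error ... ID does not exist":
            # logged for the record, but cleanup continues.
            print(f"DeleteContainer returned error for {cid[:12]}...: {err}")
            continue
        runtime.remove_container(cid)
        print(f"removed {cid[:12]}...")

if __name__ == "__main__":
    rt = FakeRuntime({})  # sandbox teardown already removed everything
    remove_containers(rt, [
        "8dc73816e18114364441951e3b6f90ad3f1fd610a02d0fd2172025e759791f00",
        "567c8a09a05d4669df41fd68176171828caa92a9bb8cdff6749bb9c84309e36a",
        "b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997",
    ])

Run as-is, all three lookups raise NotFoundError and the loop completes anyway, which is the behavior the three error/"DeleteContainer returned error" pairs around this point record.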
Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.261456 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997"} err="failed to get container status \"b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997\": rpc error: code = NotFound desc = could not find container \"b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997\": container with ID starting with b5546f156e111a57defd6eeaf9055be361b4cbe710f34e0e41b874cfd595b997 not found: ID does not exist" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.555122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b15382ce-fb54-4540-80dc-59a9210b6db8" (UID: "b15382ce-fb54-4540-80dc-59a9210b6db8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.588386 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b15382ce-fb54-4540-80dc-59a9210b6db8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.789775 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:19 crc kubenswrapper[5058]: I1014 07:54:19.801642 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9c272"] Oct 14 07:54:20 crc kubenswrapper[5058]: I1014 07:54:20.804437 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" path="/var/lib/kubelet/pods/b15382ce-fb54-4540-80dc-59a9210b6db8/volumes" Oct 14 07:55:33 crc kubenswrapper[5058]: I1014 07:55:33.656063 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:55:33 crc kubenswrapper[5058]: I1014 07:55:33.656795 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:56:03 crc kubenswrapper[5058]: I1014 07:56:03.655660 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:56:03 crc kubenswrapper[5058]: I1014 07:56:03.657113 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:56:33 crc kubenswrapper[5058]: I1014 07:56:33.656383 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 07:56:33 crc kubenswrapper[5058]: I1014 07:56:33.658683 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 07:56:33 crc kubenswrapper[5058]: I1014 07:56:33.658868 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 07:56:33 crc kubenswrapper[5058]: I1014 07:56:33.659553 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 07:56:33 crc kubenswrapper[5058]: I1014 07:56:33.659737 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069" gracePeriod=600 Oct 14 07:56:34 crc kubenswrapper[5058]: I1014 07:56:34.299523 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069" exitCode=0 Oct 14 07:56:34 crc kubenswrapper[5058]: I1014 07:56:34.299683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069"} Oct 14 07:56:34 crc kubenswrapper[5058]: I1014 07:56:34.300080 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"} Oct 14 07:56:34 crc kubenswrapper[5058]: I1014 07:56:34.300103 5058 scope.go:117] "RemoveContainer" containerID="00e0a09d230e796725d91250a471459a03f425255dd93b5f6a13b6834cbaf6fc" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.303508 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:19 crc kubenswrapper[5058]: E1014 07:57:19.304628 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="extract-content" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.304655 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="extract-content" Oct 14 07:57:19 crc kubenswrapper[5058]: E1014 07:57:19.304676 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="extract-utilities" Oct 14 07:57:19 crc 
kubenswrapper[5058]: I1014 07:57:19.304688 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="extract-utilities" Oct 14 07:57:19 crc kubenswrapper[5058]: E1014 07:57:19.304734 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="registry-server" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.304747 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="registry-server" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.305058 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15382ce-fb54-4540-80dc-59a9210b6db8" containerName="registry-server" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.306961 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.308351 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.355633 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.355704 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdwq\" (UniqueName: \"kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.355782 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.456659 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.457029 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdwq\" (UniqueName: \"kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.457132 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " 
pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.457772 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.458605 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.478766 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdwq\" (UniqueName: \"kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq\") pod \"redhat-marketplace-pjf87\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:19 crc kubenswrapper[5058]: I1014 07:57:19.631595 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:20 crc kubenswrapper[5058]: I1014 07:57:20.104966 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:20 crc kubenswrapper[5058]: I1014 07:57:20.783334 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerID="7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230" exitCode=0 Oct 14 07:57:20 crc kubenswrapper[5058]: I1014 07:57:20.783488 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerDied","Data":"7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230"} Oct 14 07:57:20 crc kubenswrapper[5058]: I1014 07:57:20.785302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerStarted","Data":"fb29fd8d5f3d9b4873d7e8a7d1c613656a7b6838775aadcfc74cefe8fba2c4af"} Oct 14 07:57:21 crc kubenswrapper[5058]: I1014 07:57:21.795955 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerStarted","Data":"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a"} Oct 14 07:57:22 crc kubenswrapper[5058]: I1014 07:57:22.807223 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerID="f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a" exitCode=0 Oct 14 07:57:22 crc kubenswrapper[5058]: I1014 07:57:22.807266 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerDied","Data":"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a"} Oct 14 07:57:23 crc kubenswrapper[5058]: I1014 07:57:23.818406 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" 
event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerStarted","Data":"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672"} Oct 14 07:57:23 crc kubenswrapper[5058]: I1014 07:57:23.847295 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pjf87" podStartSLOduration=2.165196942 podStartE2EDuration="4.847273391s" podCreationTimestamp="2025-10-14 07:57:19 +0000 UTC" firstStartedPulling="2025-10-14 07:57:20.785174963 +0000 UTC m=+4188.696258769" lastFinishedPulling="2025-10-14 07:57:23.467251392 +0000 UTC m=+4191.378335218" observedRunningTime="2025-10-14 07:57:23.841763606 +0000 UTC m=+4191.752847462" watchObservedRunningTime="2025-10-14 07:57:23.847273391 +0000 UTC m=+4191.758357207" Oct 14 07:57:28 crc kubenswrapper[5058]: I1014 07:57:28.826334 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:28 crc kubenswrapper[5058]: I1014 07:57:28.829809 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:28 crc kubenswrapper[5058]: I1014 07:57:28.842098 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.011189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh489\" (UniqueName: \"kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.011510 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.011548 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.113459 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.113527 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.113615 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh489\" (UniqueName: 
\"kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.114054 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.114290 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.134926 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh489\" (UniqueName: \"kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489\") pod \"community-operators-v5hxh\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.158732 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.631834 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.632177 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.648650 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.701205 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.872953 5058 generic.go:334] "Generic (PLEG): container finished" podID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerID="dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e" exitCode=0 Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.873010 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerDied","Data":"dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e"} Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.873347 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerStarted","Data":"1f603b489efa04beef65b7dbbaf8671fc29bb8b89f72030847425d3554d3b94d"} Oct 14 07:57:29 crc kubenswrapper[5058]: I1014 07:57:29.914543 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:31 crc kubenswrapper[5058]: I1014 07:57:31.894878 5058 generic.go:334] "Generic (PLEG): container finished" 
podID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerID="f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b" exitCode=0 Oct 14 07:57:31 crc kubenswrapper[5058]: I1014 07:57:31.894994 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerDied","Data":"f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b"} Oct 14 07:57:31 crc kubenswrapper[5058]: I1014 07:57:31.982536 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:31 crc kubenswrapper[5058]: I1014 07:57:31.982969 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pjf87" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="registry-server" containerID="cri-o://02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672" gracePeriod=2 Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.416147 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.569740 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content\") pod \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.570148 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities\") pod \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.570196 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmdwq\" (UniqueName: \"kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq\") pod \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\" (UID: \"9d7a102b-9706-4bff-be6d-aacc45f41f2e\") " Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.571621 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities" (OuterVolumeSpecName: "utilities") pod "9d7a102b-9706-4bff-be6d-aacc45f41f2e" (UID: "9d7a102b-9706-4bff-be6d-aacc45f41f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.574875 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq" (OuterVolumeSpecName: "kube-api-access-gmdwq") pod "9d7a102b-9706-4bff-be6d-aacc45f41f2e" (UID: "9d7a102b-9706-4bff-be6d-aacc45f41f2e"). InnerVolumeSpecName "kube-api-access-gmdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.592205 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d7a102b-9706-4bff-be6d-aacc45f41f2e" (UID: "9d7a102b-9706-4bff-be6d-aacc45f41f2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.671992 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.672028 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d7a102b-9706-4bff-be6d-aacc45f41f2e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.672041 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmdwq\" (UniqueName: \"kubernetes.io/projected/9d7a102b-9706-4bff-be6d-aacc45f41f2e-kube-api-access-gmdwq\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.904058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerStarted","Data":"771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee"} Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.910369 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerID="02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672" exitCode=0 Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.910423 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjf87" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.910448 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerDied","Data":"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672"} Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.910870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjf87" event={"ID":"9d7a102b-9706-4bff-be6d-aacc45f41f2e","Type":"ContainerDied","Data":"fb29fd8d5f3d9b4873d7e8a7d1c613656a7b6838775aadcfc74cefe8fba2c4af"} Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.910896 5058 scope.go:117] "RemoveContainer" containerID="02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.924654 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5hxh" podStartSLOduration=2.297640227 podStartE2EDuration="4.924417025s" podCreationTimestamp="2025-10-14 07:57:28 +0000 UTC" firstStartedPulling="2025-10-14 07:57:29.874645265 +0000 UTC m=+4197.785729071" lastFinishedPulling="2025-10-14 07:57:32.501422043 +0000 UTC m=+4200.412505869" observedRunningTime="2025-10-14 07:57:32.922947993 +0000 UTC m=+4200.834031819" watchObservedRunningTime="2025-10-14 07:57:32.924417025 +0000 UTC m=+4200.835500851" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.939581 5058 scope.go:117] "RemoveContainer" containerID="f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.946106 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.952476 5058 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-pjf87"] Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.960997 5058 scope.go:117] "RemoveContainer" containerID="7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.979106 5058 scope.go:117] "RemoveContainer" containerID="02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672" Oct 14 07:57:32 crc kubenswrapper[5058]: E1014 07:57:32.979610 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672\": container with ID starting with 02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672 not found: ID does not exist" containerID="02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.979659 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672"} err="failed to get container status \"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672\": rpc error: code = NotFound desc = could not find container \"02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672\": container with ID starting with 02784cfae576d397ffc4a3e9db3e509e004fa07c2b83e4a538ccc4be78d31672 not found: ID does not exist" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.979695 5058 scope.go:117] "RemoveContainer" containerID="f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a" Oct 14 07:57:32 crc kubenswrapper[5058]: E1014 07:57:32.980207 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a\": container with ID starting with f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a not found: ID does not exist" containerID="f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.980251 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a"} err="failed to get container status \"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a\": rpc error: code = NotFound desc = could not find container \"f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a\": container with ID starting with f1f4c8946f93175cf78cfb4e74eec4c0419e61b8e4fee305d739ab04ea0bae5a not found: ID does not exist" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.980276 5058 scope.go:117] "RemoveContainer" containerID="7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230" Oct 14 07:57:32 crc kubenswrapper[5058]: E1014 07:57:32.980632 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230\": container with ID starting with 7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230 not found: ID does not exist" containerID="7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230" Oct 14 07:57:32 crc kubenswrapper[5058]: I1014 07:57:32.980661 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230"} err="failed to get container status \"7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230\": rpc error: code = NotFound desc = could not find container \"7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230\": container with ID starting with 7d5f067325a5adf1dc2eb940e53308647d7934a14df6955071b38e5afc5f4230 not found: ID does not exist" Oct 14 07:57:34 crc kubenswrapper[5058]: I1014 07:57:34.800962 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" path="/var/lib/kubelet/pods/9d7a102b-9706-4bff-be6d-aacc45f41f2e/volumes" Oct 14 07:57:39 crc kubenswrapper[5058]: I1014 07:57:39.159041 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:39 crc kubenswrapper[5058]: I1014 07:57:39.159701 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:39 crc kubenswrapper[5058]: I1014 07:57:39.218069 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:40 crc kubenswrapper[5058]: I1014 07:57:40.038935 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:40 crc kubenswrapper[5058]: I1014 07:57:40.107466 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:41 crc kubenswrapper[5058]: I1014 07:57:41.997688 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5hxh" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="registry-server" containerID="cri-o://771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee" gracePeriod=2 Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.402502 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.423960 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content\") pod \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.424363 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities\") pod \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.424444 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh489\" (UniqueName: \"kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489\") pod \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\" (UID: \"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d\") " Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.426975 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities" (OuterVolumeSpecName: "utilities") pod "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" (UID: "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.431911 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489" (OuterVolumeSpecName: "kube-api-access-kh489") pod "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" (UID: "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d"). InnerVolumeSpecName "kube-api-access-kh489". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.487994 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" (UID: "b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.527201 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.527617 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:42 crc kubenswrapper[5058]: I1014 07:57:42.527843 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh489\" (UniqueName: \"kubernetes.io/projected/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d-kube-api-access-kh489\") on node \"crc\" DevicePath \"\"" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.011156 5058 generic.go:334] "Generic (PLEG): container finished" podID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerID="771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee" exitCode=0 Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.011240 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5hxh" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.011238 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerDied","Data":"771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee"} Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.011342 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5hxh" event={"ID":"b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d","Type":"ContainerDied","Data":"1f603b489efa04beef65b7dbbaf8671fc29bb8b89f72030847425d3554d3b94d"} Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.011386 5058 scope.go:117] "RemoveContainer" containerID="771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.054813 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.064881 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5hxh"] Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.065253 5058 scope.go:117] "RemoveContainer" containerID="f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.092953 5058 scope.go:117] "RemoveContainer" containerID="dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.127819 5058 scope.go:117] "RemoveContainer" containerID="771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee" Oct 14 07:57:43 crc kubenswrapper[5058]: E1014 07:57:43.128365 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee\": container with ID starting with 771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee not found: ID does not exist" containerID="771a580398769d06381d90fe4524bda8cd94c94e4c54f31ebec18892eaee40ee" Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.128418 
Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.128449 5058 scope.go:117] "RemoveContainer" containerID="f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b"
Oct 14 07:57:43 crc kubenswrapper[5058]: E1014 07:57:43.129144 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b\": container with ID starting with f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b not found: ID does not exist" containerID="f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b"
Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.129263 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b"} err="failed to get container status \"f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b\": rpc error: code = NotFound desc = could not find container \"f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b\": container with ID starting with f26121388bbd3ea72b31323b4d9767e1ba60117ae4f461c449eea1f2b0f2a03b not found: ID does not exist"
Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.129361 5058 scope.go:117] "RemoveContainer" containerID="dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e"
Oct 14 07:57:43 crc kubenswrapper[5058]: E1014 07:57:43.129859 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e\": container with ID starting with dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e not found: ID does not exist" containerID="dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e"
Oct 14 07:57:43 crc kubenswrapper[5058]: I1014 07:57:43.129895 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e"} err="failed to get container status \"dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e\": rpc error: code = NotFound desc = could not find container \"dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e\": container with ID starting with dedc55c3eb5b15d9218c5b9edd751f09f67ed5d71e8d94e0d4c82e9e90e00b8e not found: ID does not exist"
Oct 14 07:57:44 crc kubenswrapper[5058]: I1014 07:57:44.807359 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" path="/var/lib/kubelet/pods/b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d/volumes"
Oct 14 07:58:33 crc kubenswrapper[5058]: I1014 07:58:33.656408 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 07:58:33 crc kubenswrapper[5058]: I1014 07:58:33.656971 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 07:59:03 crc kubenswrapper[5058]: I1014 07:59:03.655928 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 07:59:03 crc kubenswrapper[5058]: I1014 07:59:03.656475 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 07:59:33 crc kubenswrapper[5058]: I1014 07:59:33.656625 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 07:59:33 crc kubenswrapper[5058]: I1014 07:59:33.657461 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 07:59:33 crc kubenswrapper[5058]: I1014 07:59:33.657542 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 07:59:33 crc kubenswrapper[5058]: I1014 07:59:33.658476 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 07:59:33 crc kubenswrapper[5058]: I1014 07:59:33.658592 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" gracePeriod=600
Oct 14 07:59:33 crc kubenswrapper[5058]: E1014 07:59:33.788227 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
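From this point the daemon is in CrashLoopBackOff: the kubelet refuses to restart the container until a backoff window has elapsed, and that window doubles per restart up to a cap, which is the "back-off 5m0s" quoted in the error. The 10 s base and 5 m cap below are the kubelet's defaults, stated here as assumptions rather than read from this log:

```go
// Sketch of the CrashLoopBackOff schedule: the delay doubles per restart
// and saturates at the cap quoted in the log ("back-off 5m0s").
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // kubelet defaults, assumed
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```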
generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" exitCode=0 Oct 14 07:59:34 crc kubenswrapper[5058]: I1014 07:59:34.030619 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"} Oct 14 07:59:34 crc kubenswrapper[5058]: I1014 07:59:34.030663 5058 scope.go:117] "RemoveContainer" containerID="f83b89ac0939a4ff21c5b6dec8923cf6d2702864d580180b0ce28e2248598069" Oct 14 07:59:34 crc kubenswrapper[5058]: I1014 07:59:34.031350 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 07:59:34 crc kubenswrapper[5058]: E1014 07:59:34.031667 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 07:59:38 crc kubenswrapper[5058]: I1014 07:59:38.983728 5058 scope.go:117] "RemoveContainer" containerID="9150e5db249f0e8cf4242775340bc80cf9e313eede022541038579d717882aae" Oct 14 07:59:39 crc kubenswrapper[5058]: I1014 07:59:39.019742 5058 scope.go:117] "RemoveContainer" containerID="7f5a27c6462c388e230237795bd93e5303eabad44075c716ed590ddb16523f74" Oct 14 07:59:39 crc kubenswrapper[5058]: I1014 07:59:39.064883 5058 scope.go:117] "RemoveContainer" containerID="f9a53e8b7fb5312b654ebe67cd1e68015ef7530d4361260cdbe06649075d903f" Oct 14 07:59:46 crc kubenswrapper[5058]: I1014 07:59:46.790679 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 07:59:46 crc kubenswrapper[5058]: E1014 07:59:46.791936 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.139186 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc"] Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140030 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="extract-content" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140043 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="extract-content" Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140054 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="extract-utilities" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140060 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" 
containerName="extract-utilities" Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140074 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="extract-utilities" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140080 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="extract-utilities" Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140092 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="extract-content" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140098 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="extract-content" Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140107 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140112 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: E1014 08:00:00.140125 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140130 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140277 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7a102b-9706-4bff-be6d-aacc45f41f2e" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140293 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49e0069-d8b8-4f38-b2bf-b7e2dd48cb5d" containerName="registry-server" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.140787 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.144442 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.145572 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.158876 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc"] Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.159728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.159848 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2gn\" (UniqueName: \"kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.159904 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.261685 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.261779 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.261923 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl2gn\" (UniqueName: \"kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.263261 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume\") pod 
\"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.266877 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.283250 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl2gn\" (UniqueName: \"kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn\") pod \"collect-profiles-29340480-gsmrc\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.465994 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:00 crc kubenswrapper[5058]: I1014 08:00:00.876373 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc"] Oct 14 08:00:01 crc kubenswrapper[5058]: I1014 08:00:01.273972 5058 generic.go:334] "Generic (PLEG): container finished" podID="3322ac8f-e633-4400-992a-f1f762b6f65c" containerID="71d45c55e6c5164bfe0a1fcf48e5652ae2cf9fa883656d87f7d39b2a533d6136" exitCode=0 Oct 14 08:00:01 crc kubenswrapper[5058]: I1014 08:00:01.274011 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" event={"ID":"3322ac8f-e633-4400-992a-f1f762b6f65c","Type":"ContainerDied","Data":"71d45c55e6c5164bfe0a1fcf48e5652ae2cf9fa883656d87f7d39b2a533d6136"} Oct 14 08:00:01 crc kubenswrapper[5058]: I1014 08:00:01.274466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" event={"ID":"3322ac8f-e633-4400-992a-f1f762b6f65c","Type":"ContainerStarted","Data":"37c0d822ce262b36da9f721f9a66b431cbf627aef7180a8b2c76eb34701ab42d"} Oct 14 08:00:01 crc kubenswrapper[5058]: I1014 08:00:01.790353 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:00:01 crc kubenswrapper[5058]: E1014 08:00:01.790680 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.621092 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.791962 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume\") pod \"3322ac8f-e633-4400-992a-f1f762b6f65c\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.792263 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume\") pod \"3322ac8f-e633-4400-992a-f1f762b6f65c\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.793423 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl2gn\" (UniqueName: \"kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn\") pod \"3322ac8f-e633-4400-992a-f1f762b6f65c\" (UID: \"3322ac8f-e633-4400-992a-f1f762b6f65c\") " Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.793289 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3322ac8f-e633-4400-992a-f1f762b6f65c" (UID: "3322ac8f-e633-4400-992a-f1f762b6f65c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.795627 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3322ac8f-e633-4400-992a-f1f762b6f65c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.798600 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3322ac8f-e633-4400-992a-f1f762b6f65c" (UID: "3322ac8f-e633-4400-992a-f1f762b6f65c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.800055 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn" (OuterVolumeSpecName: "kube-api-access-nl2gn") pod "3322ac8f-e633-4400-992a-f1f762b6f65c" (UID: "3322ac8f-e633-4400-992a-f1f762b6f65c"). InnerVolumeSpecName "kube-api-access-nl2gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.896434 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl2gn\" (UniqueName: \"kubernetes.io/projected/3322ac8f-e633-4400-992a-f1f762b6f65c-kube-api-access-nl2gn\") on node \"crc\" DevicePath \"\"" Oct 14 08:00:02 crc kubenswrapper[5058]: I1014 08:00:02.896469 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3322ac8f-e633-4400-992a-f1f762b6f65c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:00:03 crc kubenswrapper[5058]: I1014 08:00:03.295321 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" event={"ID":"3322ac8f-e633-4400-992a-f1f762b6f65c","Type":"ContainerDied","Data":"37c0d822ce262b36da9f721f9a66b431cbf627aef7180a8b2c76eb34701ab42d"} Oct 14 08:00:03 crc kubenswrapper[5058]: I1014 08:00:03.295856 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c0d822ce262b36da9f721f9a66b431cbf627aef7180a8b2c76eb34701ab42d" Oct 14 08:00:03 crc kubenswrapper[5058]: I1014 08:00:03.295449 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc" Oct 14 08:00:03 crc kubenswrapper[5058]: I1014 08:00:03.700294 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs"] Oct 14 08:00:03 crc kubenswrapper[5058]: I1014 08:00:03.705201 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340435-pmqfs"] Oct 14 08:00:04 crc kubenswrapper[5058]: I1014 08:00:04.805747 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17f8812-b143-4038-859b-1c4ca7d8ce2b" path="/var/lib/kubelet/pods/b17f8812-b143-4038-859b-1c4ca7d8ce2b/volumes" Oct 14 08:00:13 crc kubenswrapper[5058]: I1014 08:00:13.789870 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:00:13 crc kubenswrapper[5058]: E1014 08:00:13.790782 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:00:28 crc kubenswrapper[5058]: I1014 08:00:28.790082 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:00:28 crc kubenswrapper[5058]: E1014 08:00:28.790689 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:00:39 crc kubenswrapper[5058]: I1014 08:00:39.131285 5058 scope.go:117] "RemoveContainer" containerID="b73128a6dcd19f0af19b21a5c77f653f6f43aa1d0e1778d8ffc918836561bb24" Oct 14 
Oct 14 08:00:42 crc kubenswrapper[5058]: I1014 08:00:42.798617 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:00:42 crc kubenswrapper[5058]: E1014 08:00:42.800130 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:00:56 crc kubenswrapper[5058]: I1014 08:00:56.790402 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:00:56 crc kubenswrapper[5058]: E1014 08:00:56.791036 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:01:09 crc kubenswrapper[5058]: I1014 08:01:09.790403 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:01:09 crc kubenswrapper[5058]: E1014 08:01:09.791032 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:01:20 crc kubenswrapper[5058]: I1014 08:01:20.790573 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:01:20 crc kubenswrapper[5058]: E1014 08:01:20.791530 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:01:31 crc kubenswrapper[5058]: I1014 08:01:31.791013 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:01:31 crc kubenswrapper[5058]: E1014 08:01:31.792235 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:01:46 crc kubenswrapper[5058]: I1014 08:01:46.790057 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
kubenswrapper[5058]: E1014 08:01:46.791057 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:01:57 crc kubenswrapper[5058]: I1014 08:01:57.790691 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:01:57 crc kubenswrapper[5058]: E1014 08:01:57.791944 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:02:11 crc kubenswrapper[5058]: I1014 08:02:11.790148 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:02:11 crc kubenswrapper[5058]: E1014 08:02:11.790959 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:02:22 crc kubenswrapper[5058]: I1014 08:02:22.790540 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:02:22 crc kubenswrapper[5058]: E1014 08:02:22.792619 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:02:34 crc kubenswrapper[5058]: I1014 08:02:34.790448 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:02:34 crc kubenswrapper[5058]: E1014 08:02:34.792639 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:02:48 crc kubenswrapper[5058]: I1014 08:02:48.790373 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:02:48 crc kubenswrapper[5058]: E1014 08:02:48.791290 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:03:01 crc kubenswrapper[5058]: I1014 08:03:01.790522 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:03:01 crc kubenswrapper[5058]: E1014 08:03:01.791246 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:03:15 crc kubenswrapper[5058]: I1014 08:03:15.790178 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:03:15 crc kubenswrapper[5058]: E1014 08:03:15.791391 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:03:30 crc kubenswrapper[5058]: I1014 08:03:30.789926 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:03:30 crc kubenswrapper[5058]: E1014 08:03:30.791122 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:03:45 crc kubenswrapper[5058]: I1014 08:03:45.789907 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:03:45 crc kubenswrapper[5058]: E1014 08:03:45.790914 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:03:59 crc kubenswrapper[5058]: I1014 08:03:59.791383 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:03:59 crc kubenswrapper[5058]: E1014 08:03:59.792333 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:04:14 crc kubenswrapper[5058]: I1014 08:04:14.790518 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:04:14 crc kubenswrapper[5058]: E1014 08:04:14.791498 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.858020 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"] Oct 14 08:04:19 crc kubenswrapper[5058]: E1014 08:04:19.858855 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3322ac8f-e633-4400-992a-f1f762b6f65c" containerName="collect-profiles" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.858897 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3322ac8f-e633-4400-992a-f1f762b6f65c" containerName="collect-profiles" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.859160 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3322ac8f-e633-4400-992a-f1f762b6f65c" containerName="collect-profiles" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.860442 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.873407 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"] Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.970048 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-catalog-content\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.970156 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8fc\" (UniqueName: \"kubernetes.io/projected/cb80f028-54df-4425-a449-1714e08cb0c9-kube-api-access-dl8fc\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:19 crc kubenswrapper[5058]: I1014 08:04:19.970204 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-utilities\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.071921 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-catalog-content\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 
crc kubenswrapper[5058]: I1014 08:04:20.072031 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8fc\" (UniqueName: \"kubernetes.io/projected/cb80f028-54df-4425-a449-1714e08cb0c9-kube-api-access-dl8fc\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.072070 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-utilities\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.072441 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-catalog-content\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.072670 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-utilities\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.139544 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8fc\" (UniqueName: \"kubernetes.io/projected/cb80f028-54df-4425-a449-1714e08cb0c9-kube-api-access-dl8fc\") pod \"redhat-operators-wqwdk\" (UID: \"cb80f028-54df-4425-a449-1714e08cb0c9\") " pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.208551 5058 util.go:30] "No sandbox for pod can be found. 
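[Editor's note] The VerifyControllerAttachedVolume → MountVolume started → MountVolume.SetUp succeeded progression above is the kubelet volume manager reconciling its desired state (volumes the new pod needs) against its actual state (volumes currently mounted). A toy sketch of that reconcile shape; the types and function names are illustrative, not the kubelet's actual API:

package main

import "fmt"

type volume struct{ name, plugin string }

// reconcile mounts volumes that are desired but not yet set up, and
// unmounts volumes that are set up but no longer desired, mirroring the
// reconciler_common.go messages above.
func reconcile(desired, actual map[string]volume) {
	for name, v := range desired {
		if _, ok := actual[name]; !ok {
			fmt.Printf("MountVolume started for %q (plugin %s)\n", name, v.plugin)
			actual[name] = v // MountVolume.SetUp succeeded
		}
	}
	for name := range actual {
		if _, ok := desired[name]; !ok {
			fmt.Printf("UnmountVolume started for %q\n", name)
			delete(actual, name)
		}
	}
}

func main() {
	desired := map[string]volume{
		"catalog-content":       {"catalog-content", "kubernetes.io/empty-dir"},
		"utilities":             {"utilities", "kubernetes.io/empty-dir"},
		"kube-api-access-dl8fc": {"kube-api-access-dl8fc", "kubernetes.io/projected"},
	}
	actual := map[string]volume{}
	reconcile(desired, actual) // mounts all three, as in the log above
	delete(desired, "utilities")
	reconcile(desired, actual) // later, deletion drives the unmount path
}

The same loop run in reverse is what produces the UnmountVolume / "Volume detached" records when these catalog pods are torn down further below.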
Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.208551 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqwdk"
Oct 14 08:04:20 crc kubenswrapper[5058]: I1014 08:04:20.623632 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"]
Oct 14 08:04:21 crc kubenswrapper[5058]: I1014 08:04:21.558593 5058 generic.go:334] "Generic (PLEG): container finished" podID="cb80f028-54df-4425-a449-1714e08cb0c9" containerID="856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163" exitCode=0
Oct 14 08:04:21 crc kubenswrapper[5058]: I1014 08:04:21.558660 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerDied","Data":"856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163"}
Oct 14 08:04:21 crc kubenswrapper[5058]: I1014 08:04:21.558737 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerStarted","Data":"28cc3caaa5c06c7a29ae7c3390c898feff800e9e229a15ba9d83660b6ed81e50"}
Oct 14 08:04:21 crc kubenswrapper[5058]: I1014 08:04:21.562067 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 08:04:22 crc kubenswrapper[5058]: I1014 08:04:22.566000 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerStarted","Data":"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae"}
Oct 14 08:04:23 crc kubenswrapper[5058]: I1014 08:04:23.573988 5058 generic.go:334] "Generic (PLEG): container finished" podID="cb80f028-54df-4425-a449-1714e08cb0c9" containerID="4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae" exitCode=0
Oct 14 08:04:23 crc kubenswrapper[5058]: I1014 08:04:23.574143 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerDied","Data":"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae"}
Oct 14 08:04:24 crc kubenswrapper[5058]: I1014 08:04:24.588989 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerStarted","Data":"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be"}
Oct 14 08:04:24 crc kubenswrapper[5058]: I1014 08:04:24.621345 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wqwdk" podStartSLOduration=3.121401895 podStartE2EDuration="5.621322538s" podCreationTimestamp="2025-10-14 08:04:19 +0000 UTC" firstStartedPulling="2025-10-14 08:04:21.561521325 +0000 UTC m=+4609.472605171" lastFinishedPulling="2025-10-14 08:04:24.061442008 +0000 UTC m=+4611.972525814" observedRunningTime="2025-10-14 08:04:24.611714687 +0000 UTC m=+4612.522798583" watchObservedRunningTime="2025-10-14 08:04:24.621322538 +0000 UTC m=+4612.532406384"
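[Editor's note] The startup-latency record above is worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The pull window is taken from the monotonic m=+... readings, which is why a pure wall-clock subtraction lands a few tens of nanoseconds off. A short check of the arithmetic, using the values from this record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock timestamps from the record above.
	created := time.Date(2025, 10, 14, 8, 4, 19, 0, time.UTC)
	observedRunning := time.Date(2025, 10, 14, 8, 4, 24, 621322538, time.UTC)

	e2e := observedRunning.Sub(created) // 5.621322538s = podStartE2EDuration

	// Pull window from the monotonic offsets, expressed in nanoseconds:
	// m=+4611.972525814 minus m=+4609.472605171.
	pullWindow := time.Duration(4611972525814 - 4609472605171)

	fmt.Println(e2e, e2e-pullWindow) // 5.621322538s 3.121401895s
}

The second printed value matches podStartSLOduration=3.121401895 exactly, confirming the decomposition.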
Oct 14 08:04:25 crc kubenswrapper[5058]: I1014 08:04:25.790618 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b"
Oct 14 08:04:25 crc kubenswrapper[5058]: E1014 08:04:25.791147 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:04:30 crc kubenswrapper[5058]: I1014 08:04:30.209442 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wqwdk"
Oct 14 08:04:30 crc kubenswrapper[5058]: I1014 08:04:30.209752 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wqwdk"
Oct 14 08:04:30 crc kubenswrapper[5058]: I1014 08:04:30.281922 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wqwdk"
Oct 14 08:04:30 crc kubenswrapper[5058]: I1014 08:04:30.699680 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wqwdk"
Oct 14 08:04:30 crc kubenswrapper[5058]: I1014 08:04:30.759911 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"]
Oct 14 08:04:32 crc kubenswrapper[5058]: I1014 08:04:32.665378 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wqwdk" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="registry-server" containerID="cri-o://551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be" gracePeriod=2
Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.085089 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqwdk"
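[Editor's note] "Killing container with a grace period ... gracePeriod=2" above means the runtime gives registry-server two seconds to exit after the stop signal before force-killing it. A Unix-flavored sketch of that pattern, illustrative only and not CRI-O's implementation:

package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, then SIGKILL if the process is still alive
// when the grace period expires, the behavior gracePeriod=2 implies.
func stopWithGrace(p *os.Process, grace time.Duration) {
	_ = p.Signal(syscall.SIGTERM)
	done := make(chan struct{})
	go func() {
		_, _ = p.Wait()
		close(done)
	}()
	select {
	case <-done: // exited within the grace period
	case <-time.After(grace):
		_ = p.Kill() // grace period expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd.Process, 2*time.Second)
}

The exit records that follow show exitCode=0, so here the container shut down cleanly within its window.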
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.165902 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb80f028-54df-4425-a449-1714e08cb0c9-kube-api-access-dl8fc" (OuterVolumeSpecName: "kube-api-access-dl8fc") pod "cb80f028-54df-4425-a449-1714e08cb0c9" (UID: "cb80f028-54df-4425-a449-1714e08cb0c9"). InnerVolumeSpecName "kube-api-access-dl8fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.263315 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl8fc\" (UniqueName: \"kubernetes.io/projected/cb80f028-54df-4425-a449-1714e08cb0c9-kube-api-access-dl8fc\") on node \"crc\" DevicePath \"\"" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.263376 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.677107 5058 generic.go:334] "Generic (PLEG): container finished" podID="cb80f028-54df-4425-a449-1714e08cb0c9" containerID="551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be" exitCode=0 Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.677198 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqwdk" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.677255 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerDied","Data":"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be"} Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.677352 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqwdk" event={"ID":"cb80f028-54df-4425-a449-1714e08cb0c9","Type":"ContainerDied","Data":"28cc3caaa5c06c7a29ae7c3390c898feff800e9e229a15ba9d83660b6ed81e50"} Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.677387 5058 scope.go:117] "RemoveContainer" containerID="551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.707427 5058 scope.go:117] "RemoveContainer" containerID="4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.835005 5058 scope.go:117] "RemoveContainer" containerID="856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.864882 5058 scope.go:117] "RemoveContainer" containerID="551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be" Oct 14 08:04:33 crc kubenswrapper[5058]: E1014 08:04:33.865284 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be\": container with ID starting with 551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be not found: ID does not exist" containerID="551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.865343 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be"} err="failed to get container status \"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be\": rpc error: code = NotFound desc = could not find container \"551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be\": container with ID starting with 551117d275ec38d4dde8960fbcdd88323f083481cd5bae6709753dbb207fb9be not found: ID does not exist" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.865380 5058 scope.go:117] "RemoveContainer" containerID="4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae" Oct 14 08:04:33 crc kubenswrapper[5058]: E1014 08:04:33.865684 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae\": container with ID starting with 4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae not found: ID does not exist" containerID="4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.865720 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae"} err="failed to get container status \"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae\": rpc error: code = NotFound desc = could not find container \"4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae\": container with ID starting with 4736fe2003ad05bbb5df9e234f1c8a72aafc6beeb3ca577bd8c4574889c8cdae not found: ID does not exist" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.865744 5058 scope.go:117] "RemoveContainer" containerID="856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163" Oct 14 08:04:33 crc kubenswrapper[5058]: E1014 08:04:33.866122 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163\": container with ID starting with 856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163 not found: ID does not exist" containerID="856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163" Oct 14 08:04:33 crc kubenswrapper[5058]: I1014 08:04:33.866152 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163"} err="failed to get container status \"856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163\": rpc error: code = NotFound desc = could not find container \"856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163\": container with ID starting with 856c3b4c93583d9dbff37c32ea9089f0ca3a4c4649a72e8666a826c79297e163 not found: ID does not exist" Oct 14 08:04:34 crc kubenswrapper[5058]: I1014 08:04:34.092521 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb80f028-54df-4425-a449-1714e08cb0c9" (UID: "cb80f028-54df-4425-a449-1714e08cb0c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:04:34 crc kubenswrapper[5058]: I1014 08:04:34.179930 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb80f028-54df-4425-a449-1714e08cb0c9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:04:34 crc kubenswrapper[5058]: I1014 08:04:34.328993 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"] Oct 14 08:04:34 crc kubenswrapper[5058]: I1014 08:04:34.337070 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wqwdk"] Oct 14 08:04:34 crc kubenswrapper[5058]: I1014 08:04:34.803893 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" path="/var/lib/kubelet/pods/cb80f028-54df-4425-a449-1714e08cb0c9/volumes" Oct 14 08:04:40 crc kubenswrapper[5058]: I1014 08:04:40.790725 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:04:41 crc kubenswrapper[5058]: I1014 08:04:41.744461 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702"} Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.677121 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"] Oct 14 08:06:30 crc kubenswrapper[5058]: E1014 08:06:30.679285 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="registry-server" Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.679460 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="registry-server" Oct 14 08:06:30 crc kubenswrapper[5058]: E1014 08:06:30.679527 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="extract-content" Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.679585 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="extract-content" Oct 14 08:06:30 crc kubenswrapper[5058]: E1014 08:06:30.679655 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="extract-utilities" Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.679721 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="extract-utilities" Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.679942 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb80f028-54df-4425-a449-1714e08cb0c9" containerName="registry-server" Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.681038 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.681038 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.692937 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"]
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.746160 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.746294 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.746418 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7vj\" (UniqueName: \"kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.847314 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7vj\" (UniqueName: \"kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.847411 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.847495 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.848124 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.848355 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:30 crc kubenswrapper[5058]: I1014 08:06:30.875350 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7vj\" (UniqueName: \"kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj\") pod \"certified-operators-t7z6q\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") " pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:31 crc kubenswrapper[5058]: I1014 08:06:31.047105 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:31 crc kubenswrapper[5058]: I1014 08:06:31.307859 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"]
Oct 14 08:06:31 crc kubenswrapper[5058]: W1014 08:06:31.317939 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808e25c5_92f3_4a08_88e7_1239f331f55c.slice/crio-b55f2b671062aea855d50040c49d40297e1fbb9f01a7b4e4deef30b16eb7edfa WatchSource:0}: Error finding container b55f2b671062aea855d50040c49d40297e1fbb9f01a7b4e4deef30b16eb7edfa: Status 404 returned error can't find the container with id b55f2b671062aea855d50040c49d40297e1fbb9f01a7b4e4deef30b16eb7edfa
Oct 14 08:06:31 crc kubenswrapper[5058]: I1014 08:06:31.737582 5058 generic.go:334] "Generic (PLEG): container finished" podID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerID="df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798" exitCode=0
Oct 14 08:06:31 crc kubenswrapper[5058]: I1014 08:06:31.737620 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerDied","Data":"df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798"}
Oct 14 08:06:31 crc kubenswrapper[5058]: I1014 08:06:31.737659 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerStarted","Data":"b55f2b671062aea855d50040c49d40297e1fbb9f01a7b4e4deef30b16eb7edfa"}
Oct 14 08:06:33 crc kubenswrapper[5058]: I1014 08:06:33.766776 5058 generic.go:334] "Generic (PLEG): container finished" podID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerID="807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f" exitCode=0
Oct 14 08:06:33 crc kubenswrapper[5058]: I1014 08:06:33.766941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerDied","Data":"807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f"}
Oct 14 08:06:34 crc kubenswrapper[5058]: I1014 08:06:34.775106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerStarted","Data":"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"}
Oct 14 08:06:34 crc kubenswrapper[5058]: I1014 08:06:34.794467 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7z6q" podStartSLOduration=2.111624442 podStartE2EDuration="4.794448087s" podCreationTimestamp="2025-10-14 08:06:30 +0000 UTC" firstStartedPulling="2025-10-14 08:06:31.739293752 +0000 UTC m=+4739.650377558" lastFinishedPulling="2025-10-14 08:06:34.422117397 +0000 UTC m=+4742.333201203" observedRunningTime="2025-10-14 08:06:34.790634859 +0000 UTC m=+4742.701718665" watchObservedRunningTime="2025-10-14 08:06:34.794448087 +0000 UTC m=+4742.705531893"
Oct 14 08:06:41 crc kubenswrapper[5058]: I1014 08:06:41.048293 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:41 crc kubenswrapper[5058]: I1014 08:06:41.049193 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:41 crc kubenswrapper[5058]: I1014 08:06:41.108354 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:41 crc kubenswrapper[5058]: I1014 08:06:41.891412 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:43 crc kubenswrapper[5058]: I1014 08:06:43.076668 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"]
Oct 14 08:06:43 crc kubenswrapper[5058]: I1014 08:06:43.858857 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t7z6q" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="registry-server" containerID="cri-o://15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f" gracePeriod=2
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.288358 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.354181 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp7vj\" (UniqueName: \"kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj\") pod \"808e25c5-92f3-4a08-88e7-1239f331f55c\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") "
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.354252 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities\") pod \"808e25c5-92f3-4a08-88e7-1239f331f55c\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") "
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.354291 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content\") pod \"808e25c5-92f3-4a08-88e7-1239f331f55c\" (UID: \"808e25c5-92f3-4a08-88e7-1239f331f55c\") "
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.360266 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities" (OuterVolumeSpecName: "utilities") pod "808e25c5-92f3-4a08-88e7-1239f331f55c" (UID: "808e25c5-92f3-4a08-88e7-1239f331f55c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.374121 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj" (OuterVolumeSpecName: "kube-api-access-bp7vj") pod "808e25c5-92f3-4a08-88e7-1239f331f55c" (UID: "808e25c5-92f3-4a08-88e7-1239f331f55c"). InnerVolumeSpecName "kube-api-access-bp7vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.426531 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808e25c5-92f3-4a08-88e7-1239f331f55c" (UID: "808e25c5-92f3-4a08-88e7-1239f331f55c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.456505 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp7vj\" (UniqueName: \"kubernetes.io/projected/808e25c5-92f3-4a08-88e7-1239f331f55c-kube-api-access-bp7vj\") on node \"crc\" DevicePath \"\""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.456769 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.456906 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808e25c5-92f3-4a08-88e7-1239f331f55c-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.877307 5058 generic.go:334] "Generic (PLEG): container finished" podID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerID="15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f" exitCode=0
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.877377 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerDied","Data":"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"}
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.877412 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7z6q" event={"ID":"808e25c5-92f3-4a08-88e7-1239f331f55c","Type":"ContainerDied","Data":"b55f2b671062aea855d50040c49d40297e1fbb9f01a7b4e4deef30b16eb7edfa"}
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.877429 5058 scope.go:117] "RemoveContainer" containerID="15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.877475 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7z6q"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.904090 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"]
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.905853 5058 scope.go:117] "RemoveContainer" containerID="807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.911635 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t7z6q"]
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.923394 5058 scope.go:117] "RemoveContainer" containerID="df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.992053 5058 scope.go:117] "RemoveContainer" containerID="15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"
Oct 14 08:06:44 crc kubenswrapper[5058]: E1014 08:06:44.992602 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f\": container with ID starting with 15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f not found: ID does not exist" containerID="15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.992670 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f"} err="failed to get container status \"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f\": rpc error: code = NotFound desc = could not find container \"15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f\": container with ID starting with 15d7ef5d5ec018275784b258b97be200b1acab503f91320bf0964a30e89d958f not found: ID does not exist"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.992699 5058 scope.go:117] "RemoveContainer" containerID="807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f"
Oct 14 08:06:44 crc kubenswrapper[5058]: E1014 08:06:44.993097 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f\": container with ID starting with 807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f not found: ID does not exist" containerID="807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.993128 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f"} err="failed to get container status \"807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f\": rpc error: code = NotFound desc = could not find container \"807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f\": container with ID starting with 807d88acb33d74be370489441977744ba836155eee07e71470b121363a53e49f not found: ID does not exist"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.993152 5058 scope.go:117] "RemoveContainer" containerID="df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798"
Oct 14 08:06:44 crc kubenswrapper[5058]: E1014 08:06:44.993488 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798\": container with ID starting with df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798 not found: ID does not exist" containerID="df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798"
Oct 14 08:06:44 crc kubenswrapper[5058]: I1014 08:06:44.993526 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798"} err="failed to get container status \"df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798\": rpc error: code = NotFound desc = could not find container \"df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798\": container with ID starting with df5382f748280b75a5220aab78c12129b7bf4aebc0e047a213e78b524dc0d798 not found: ID does not exist"
Oct 14 08:06:46 crc kubenswrapper[5058]: I1014 08:06:46.798023 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" path="/var/lib/kubelet/pods/808e25c5-92f3-4a08-88e7-1239f331f55c/volumes"
Oct 14 08:07:03 crc kubenswrapper[5058]: I1014 08:07:03.656992 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 08:07:03 crc kubenswrapper[5058]: I1014 08:07:03.657613 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 08:07:33 crc kubenswrapper[5058]: I1014 08:07:33.656817 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 08:07:33 crc kubenswrapper[5058]: I1014 08:07:33.657537 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 08:08:03 crc kubenswrapper[5058]: I1014 08:08:03.655811 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 08:08:03 crc kubenswrapper[5058]: I1014 08:08:03.656357 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:08:03 crc kubenswrapper[5058]: I1014 08:08:03.657052 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:08:03 crc kubenswrapper[5058]: I1014 08:08:03.657106 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702" gracePeriod=600 Oct 14 08:08:04 crc kubenswrapper[5058]: I1014 08:08:04.612293 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702" exitCode=0 Oct 14 08:08:04 crc kubenswrapper[5058]: I1014 08:08:04.612454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702"} Oct 14 08:08:04 crc kubenswrapper[5058]: I1014 08:08:04.612627 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc"} Oct 14 08:08:04 crc kubenswrapper[5058]: I1014 08:08:04.612650 5058 scope.go:117] "RemoveContainer" containerID="d03a5a47c00eb10ff3b9e5a2b5380f23e232fb4cc4f3d33274edd257ad81db4b" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.821617 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:08:54 crc kubenswrapper[5058]: E1014 08:08:54.823062 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="registry-server" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.823097 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="registry-server" Oct 14 08:08:54 crc kubenswrapper[5058]: E1014 08:08:54.823128 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="extract-content" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.823145 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="extract-content" Oct 14 08:08:54 crc kubenswrapper[5058]: E1014 08:08:54.823205 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="extract-utilities" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.823227 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="extract-utilities" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.823580 5058 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="808e25c5-92f3-4a08-88e7-1239f331f55c" containerName="registry-server" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.834893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.835058 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.865819 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.865904 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.865932 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlln7\" (UniqueName: \"kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.967066 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.967438 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.967476 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlln7\" (UniqueName: \"kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.967846 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.968071 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content\") pod \"community-operators-4qqkj\" (UID: 
\"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:54 crc kubenswrapper[5058]: I1014 08:08:54.997182 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlln7\" (UniqueName: \"kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7\") pod \"community-operators-4qqkj\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:55 crc kubenswrapper[5058]: I1014 08:08:55.189265 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:08:55 crc kubenswrapper[5058]: I1014 08:08:55.714317 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:08:56 crc kubenswrapper[5058]: I1014 08:08:56.093499 5058 generic.go:334] "Generic (PLEG): container finished" podID="df1704b1-85ea-4dae-849d-d68250cc806c" containerID="4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950" exitCode=0 Oct 14 08:08:56 crc kubenswrapper[5058]: I1014 08:08:56.093566 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerDied","Data":"4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950"} Oct 14 08:08:56 crc kubenswrapper[5058]: I1014 08:08:56.093936 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerStarted","Data":"09ee22bce49a7cdae1e0658a8f0219a22c15209d691ccc69df07a2cb392db66d"} Oct 14 08:08:58 crc kubenswrapper[5058]: I1014 08:08:58.116961 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerStarted","Data":"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6"} Oct 14 08:08:59 crc kubenswrapper[5058]: I1014 08:08:59.128502 5058 generic.go:334] "Generic (PLEG): container finished" podID="df1704b1-85ea-4dae-849d-d68250cc806c" containerID="946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6" exitCode=0 Oct 14 08:08:59 crc kubenswrapper[5058]: I1014 08:08:59.128753 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerDied","Data":"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6"} Oct 14 08:09:00 crc kubenswrapper[5058]: I1014 08:09:00.139177 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerStarted","Data":"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05"} Oct 14 08:09:00 crc kubenswrapper[5058]: I1014 08:09:00.156356 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4qqkj" podStartSLOduration=2.578207465 podStartE2EDuration="6.156338422s" podCreationTimestamp="2025-10-14 08:08:54 +0000 UTC" firstStartedPulling="2025-10-14 08:08:56.096603374 +0000 UTC m=+4884.007687210" lastFinishedPulling="2025-10-14 08:08:59.674734321 +0000 UTC m=+4887.585818167" observedRunningTime="2025-10-14 08:09:00.154278264 +0000 UTC m=+4888.065362070" 
watchObservedRunningTime="2025-10-14 08:09:00.156338422 +0000 UTC m=+4888.067422228" Oct 14 08:09:05 crc kubenswrapper[5058]: I1014 08:09:05.189862 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:05 crc kubenswrapper[5058]: I1014 08:09:05.191670 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:05 crc kubenswrapper[5058]: I1014 08:09:05.277714 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:06 crc kubenswrapper[5058]: I1014 08:09:06.249408 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:06 crc kubenswrapper[5058]: I1014 08:09:06.295413 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.206201 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4qqkj" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="registry-server" containerID="cri-o://49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05" gracePeriod=2 Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.684037 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.879654 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content\") pod \"df1704b1-85ea-4dae-849d-d68250cc806c\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.880404 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlln7\" (UniqueName: \"kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7\") pod \"df1704b1-85ea-4dae-849d-d68250cc806c\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.880513 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities\") pod \"df1704b1-85ea-4dae-849d-d68250cc806c\" (UID: \"df1704b1-85ea-4dae-849d-d68250cc806c\") " Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.881394 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities" (OuterVolumeSpecName: "utilities") pod "df1704b1-85ea-4dae-849d-d68250cc806c" (UID: "df1704b1-85ea-4dae-849d-d68250cc806c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.887561 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7" (OuterVolumeSpecName: "kube-api-access-mlln7") pod "df1704b1-85ea-4dae-849d-d68250cc806c" (UID: "df1704b1-85ea-4dae-849d-d68250cc806c"). InnerVolumeSpecName "kube-api-access-mlln7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.981990 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:09:08 crc kubenswrapper[5058]: I1014 08:09:08.982023 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlln7\" (UniqueName: \"kubernetes.io/projected/df1704b1-85ea-4dae-849d-d68250cc806c-kube-api-access-mlln7\") on node \"crc\" DevicePath \"\"" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.006106 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df1704b1-85ea-4dae-849d-d68250cc806c" (UID: "df1704b1-85ea-4dae-849d-d68250cc806c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.082813 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1704b1-85ea-4dae-849d-d68250cc806c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.217549 5058 generic.go:334] "Generic (PLEG): container finished" podID="df1704b1-85ea-4dae-849d-d68250cc806c" containerID="49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05" exitCode=0 Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.217602 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerDied","Data":"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05"} Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.217642 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qqkj" event={"ID":"df1704b1-85ea-4dae-849d-d68250cc806c","Type":"ContainerDied","Data":"09ee22bce49a7cdae1e0658a8f0219a22c15209d691ccc69df07a2cb392db66d"} Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.217652 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4qqkj" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.217667 5058 scope.go:117] "RemoveContainer" containerID="49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.238901 5058 scope.go:117] "RemoveContainer" containerID="946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.265347 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.271388 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4qqkj"] Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.283782 5058 scope.go:117] "RemoveContainer" containerID="4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.302996 5058 scope.go:117] "RemoveContainer" containerID="49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05" Oct 14 08:09:09 crc kubenswrapper[5058]: E1014 08:09:09.303470 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05\": container with ID starting with 49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05 not found: ID does not exist" containerID="49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.303510 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05"} err="failed to get container status \"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05\": rpc error: code = NotFound desc = could not find container \"49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05\": container with ID starting with 49644b8ff27dec384fc0ecde3769575d1ea49f163d99682233b729188bcb2b05 not found: ID does not exist" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.303548 5058 scope.go:117] "RemoveContainer" containerID="946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6" Oct 14 08:09:09 crc kubenswrapper[5058]: E1014 08:09:09.303836 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6\": container with ID starting with 946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6 not found: ID does not exist" containerID="946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.303878 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6"} err="failed to get container status \"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6\": rpc error: code = NotFound desc = could not find container \"946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6\": container with ID starting with 946e9add926abcd68e5545775b456a411ece6beacf44bc6aaa49320984181fb6 not found: ID does not exist" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.303893 5058 scope.go:117] "RemoveContainer" 
containerID="4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950" Oct 14 08:09:09 crc kubenswrapper[5058]: E1014 08:09:09.304167 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950\": container with ID starting with 4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950 not found: ID does not exist" containerID="4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950" Oct 14 08:09:09 crc kubenswrapper[5058]: I1014 08:09:09.304208 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950"} err="failed to get container status \"4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950\": rpc error: code = NotFound desc = could not find container \"4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950\": container with ID starting with 4c5955d6ee9ff4ecb41dac20fa925f3f1be1978e17b0f6be5e6a290cbe3bb950 not found: ID does not exist" Oct 14 08:09:10 crc kubenswrapper[5058]: I1014 08:09:10.804488 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" path="/var/lib/kubelet/pods/df1704b1-85ea-4dae-849d-d68250cc806c/volumes" Oct 14 08:10:03 crc kubenswrapper[5058]: I1014 08:10:03.656276 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:10:03 crc kubenswrapper[5058]: I1014 08:10:03.657025 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:10:33 crc kubenswrapper[5058]: I1014 08:10:33.655919 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:10:33 crc kubenswrapper[5058]: I1014 08:10:33.656553 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.370637 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:10:46 crc kubenswrapper[5058]: E1014 08:10:46.371481 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="extract-utilities" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.371497 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="extract-utilities" Oct 14 08:10:46 crc kubenswrapper[5058]: E1014 08:10:46.371517 5058 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="registry-server" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.371525 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="registry-server" Oct 14 08:10:46 crc kubenswrapper[5058]: E1014 08:10:46.371548 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="extract-content" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.371557 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="extract-content" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.371724 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1704b1-85ea-4dae-849d-d68250cc806c" containerName="registry-server" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.372921 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.380740 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.432900 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.433013 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.433049 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfps\" (UniqueName: \"kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.534875 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.535049 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.535095 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfps\" (UniqueName: 
\"kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.535856 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.536022 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.568557 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gfps\" (UniqueName: \"kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps\") pod \"redhat-marketplace-trn9p\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:46 crc kubenswrapper[5058]: I1014 08:10:46.698375 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:47 crc kubenswrapper[5058]: I1014 08:10:47.155391 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:10:48 crc kubenswrapper[5058]: I1014 08:10:48.111043 5058 generic.go:334] "Generic (PLEG): container finished" podID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerID="caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63" exitCode=0 Oct 14 08:10:48 crc kubenswrapper[5058]: I1014 08:10:48.111172 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerDied","Data":"caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63"} Oct 14 08:10:48 crc kubenswrapper[5058]: I1014 08:10:48.111395 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerStarted","Data":"8f007cadbe2eae0867b9a36c78cc076d7c507314f46e582fdffeee64db166750"} Oct 14 08:10:48 crc kubenswrapper[5058]: I1014 08:10:48.114080 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:10:49 crc kubenswrapper[5058]: I1014 08:10:49.122821 5058 generic.go:334] "Generic (PLEG): container finished" podID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerID="ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3" exitCode=0 Oct 14 08:10:49 crc kubenswrapper[5058]: I1014 08:10:49.122905 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerDied","Data":"ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3"} Oct 14 08:10:50 crc kubenswrapper[5058]: I1014 08:10:50.158052 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerStarted","Data":"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9"} Oct 14 08:10:50 crc kubenswrapper[5058]: I1014 08:10:50.189680 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trn9p" podStartSLOduration=2.78489825 podStartE2EDuration="4.189146224s" podCreationTimestamp="2025-10-14 08:10:46 +0000 UTC" firstStartedPulling="2025-10-14 08:10:48.113297046 +0000 UTC m=+4996.024380892" lastFinishedPulling="2025-10-14 08:10:49.51754504 +0000 UTC m=+4997.428628866" observedRunningTime="2025-10-14 08:10:50.180687045 +0000 UTC m=+4998.091770881" watchObservedRunningTime="2025-10-14 08:10:50.189146224 +0000 UTC m=+4998.100230040" Oct 14 08:10:56 crc kubenswrapper[5058]: I1014 08:10:56.699459 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:56 crc kubenswrapper[5058]: I1014 08:10:56.700145 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:56 crc kubenswrapper[5058]: I1014 08:10:56.776566 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:57 crc kubenswrapper[5058]: I1014 08:10:57.297844 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:57 crc kubenswrapper[5058]: I1014 08:10:57.370104 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.242137 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trn9p" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="registry-server" containerID="cri-o://41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9" gracePeriod=2 Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.653443 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.834829 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content\") pod \"1775c250-7ad2-4655-8abc-076ec7d480bc\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.835396 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities\") pod \"1775c250-7ad2-4655-8abc-076ec7d480bc\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.835617 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gfps\" (UniqueName: \"kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps\") pod \"1775c250-7ad2-4655-8abc-076ec7d480bc\" (UID: \"1775c250-7ad2-4655-8abc-076ec7d480bc\") " Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.836623 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities" (OuterVolumeSpecName: "utilities") pod "1775c250-7ad2-4655-8abc-076ec7d480bc" (UID: "1775c250-7ad2-4655-8abc-076ec7d480bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.837179 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.842539 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps" (OuterVolumeSpecName: "kube-api-access-6gfps") pod "1775c250-7ad2-4655-8abc-076ec7d480bc" (UID: "1775c250-7ad2-4655-8abc-076ec7d480bc"). InnerVolumeSpecName "kube-api-access-6gfps". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.850501 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1775c250-7ad2-4655-8abc-076ec7d480bc" (UID: "1775c250-7ad2-4655-8abc-076ec7d480bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.937890 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gfps\" (UniqueName: \"kubernetes.io/projected/1775c250-7ad2-4655-8abc-076ec7d480bc-kube-api-access-6gfps\") on node \"crc\" DevicePath \"\"" Oct 14 08:10:59 crc kubenswrapper[5058]: I1014 08:10:59.937947 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1775c250-7ad2-4655-8abc-076ec7d480bc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.252531 5058 generic.go:334] "Generic (PLEG): container finished" podID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerID="41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9" exitCode=0 Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.252692 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerDied","Data":"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9"} Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.253058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trn9p" event={"ID":"1775c250-7ad2-4655-8abc-076ec7d480bc","Type":"ContainerDied","Data":"8f007cadbe2eae0867b9a36c78cc076d7c507314f46e582fdffeee64db166750"} Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.253094 5058 scope.go:117] "RemoveContainer" containerID="41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.252810 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trn9p" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.285032 5058 scope.go:117] "RemoveContainer" containerID="ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.316060 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.322201 5058 scope.go:117] "RemoveContainer" containerID="caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.325204 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trn9p"] Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.349232 5058 scope.go:117] "RemoveContainer" containerID="41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9" Oct 14 08:11:00 crc kubenswrapper[5058]: E1014 08:11:00.349827 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9\": container with ID starting with 41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9 not found: ID does not exist" containerID="41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.349861 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9"} err="failed to get container status \"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9\": rpc error: code = NotFound desc = could not find container \"41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9\": container with ID starting with 41640f4409e6adcb0d4f09e75134bc937cd5c6f90d5360f10f4fdfab1f107dc9 not found: ID does not exist" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.349882 5058 scope.go:117] "RemoveContainer" containerID="ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3" Oct 14 08:11:00 crc kubenswrapper[5058]: E1014 08:11:00.350242 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3\": container with ID starting with ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3 not found: ID does not exist" containerID="ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.350287 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3"} err="failed to get container status \"ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3\": rpc error: code = NotFound desc = could not find container \"ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3\": container with ID starting with ae556aa3a5a16246fb29468822e7feb60d87e99b5a1c50d5277a7898749de1b3 not found: ID does not exist" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.350313 5058 scope.go:117] "RemoveContainer" containerID="caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63" Oct 14 08:11:00 crc kubenswrapper[5058]: E1014 08:11:00.350646 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63\": container with ID starting with caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63 not found: ID does not exist" containerID="caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.350671 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63"} err="failed to get container status \"caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63\": rpc error: code = NotFound desc = could not find container \"caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63\": container with ID starting with caba41b277a1640d78ff547279fe102718fd33da82e77faac2c66aa14b48ca63 not found: ID does not exist" Oct 14 08:11:00 crc kubenswrapper[5058]: I1014 08:11:00.803425 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" path="/var/lib/kubelet/pods/1775c250-7ad2-4655-8abc-076ec7d480bc/volumes" Oct 14 08:11:03 crc kubenswrapper[5058]: I1014 08:11:03.656198 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:11:03 crc kubenswrapper[5058]: I1014 08:11:03.656609 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:11:03 crc kubenswrapper[5058]: I1014 08:11:03.656663 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:11:03 crc kubenswrapper[5058]: I1014 08:11:03.657386 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:11:03 crc kubenswrapper[5058]: I1014 08:11:03.657457 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" gracePeriod=600 Oct 14 08:11:03 crc kubenswrapper[5058]: E1014 08:11:03.784591 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:11:04 crc kubenswrapper[5058]: I1014 08:11:04.293380 5058 generic.go:334] 
"Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" exitCode=0 Oct 14 08:11:04 crc kubenswrapper[5058]: I1014 08:11:04.293479 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc"} Oct 14 08:11:04 crc kubenswrapper[5058]: I1014 08:11:04.294268 5058 scope.go:117] "RemoveContainer" containerID="fe67a4398a3ea30f3032a863ed5c4acedc195b8524a4edcbc1f714d4494ae702" Oct 14 08:11:04 crc kubenswrapper[5058]: I1014 08:11:04.295265 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:11:04 crc kubenswrapper[5058]: E1014 08:11:04.295724 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:11:15 crc kubenswrapper[5058]: I1014 08:11:15.789948 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:11:15 crc kubenswrapper[5058]: E1014 08:11:15.790497 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:11:26 crc kubenswrapper[5058]: I1014 08:11:26.790209 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:11:26 crc kubenswrapper[5058]: E1014 08:11:26.792645 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:11:37 crc kubenswrapper[5058]: I1014 08:11:37.789875 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:11:37 crc kubenswrapper[5058]: E1014 08:11:37.790530 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:11:52 crc kubenswrapper[5058]: I1014 08:11:52.795153 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" 
Oct 14 08:11:52 crc kubenswrapper[5058]: E1014 08:11:52.796135 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:12:07 crc kubenswrapper[5058]: I1014 08:12:07.790130 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:12:07 crc kubenswrapper[5058]: E1014 08:12:07.791100 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:12:18 crc kubenswrapper[5058]: I1014 08:12:18.791190 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:12:18 crc kubenswrapper[5058]: E1014 08:12:18.791883 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:12:30 crc kubenswrapper[5058]: I1014 08:12:30.790637 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:12:30 crc kubenswrapper[5058]: E1014 08:12:30.791733 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:12:45 crc kubenswrapper[5058]: I1014 08:12:45.790135 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:12:45 crc kubenswrapper[5058]: E1014 08:12:45.790909 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:12:56 crc kubenswrapper[5058]: I1014 08:12:56.790636 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:12:56 crc kubenswrapper[5058]: E1014 08:12:56.791477 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:13:08 crc kubenswrapper[5058]: I1014 08:13:08.790881 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:13:08 crc kubenswrapper[5058]: E1014 08:13:08.792268 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:13:22 crc kubenswrapper[5058]: I1014 08:13:22.799526 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:13:22 crc kubenswrapper[5058]: E1014 08:13:22.800910 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:13:33 crc kubenswrapper[5058]: I1014 08:13:33.790438 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:13:33 crc kubenswrapper[5058]: E1014 08:13:33.791564 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:13:44 crc kubenswrapper[5058]: I1014 08:13:44.791579 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:13:44 crc kubenswrapper[5058]: E1014 08:13:44.792645 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:13:58 crc kubenswrapper[5058]: I1014 08:13:58.790051 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:13:58 crc kubenswrapper[5058]: E1014 08:13:58.791567 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:14:09 crc kubenswrapper[5058]: I1014 08:14:09.790170 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:14:09 crc kubenswrapper[5058]: E1014 08:14:09.791182 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:14:22 crc kubenswrapper[5058]: I1014 08:14:22.799385 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:14:22 crc kubenswrapper[5058]: E1014 08:14:22.800366 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:14:35 crc kubenswrapper[5058]: I1014 08:14:35.790934 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:14:35 crc kubenswrapper[5058]: E1014 08:14:35.792403 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:14:46 crc kubenswrapper[5058]: I1014 08:14:46.789775 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:14:46 crc kubenswrapper[5058]: E1014 08:14:46.790628 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:14:58 crc kubenswrapper[5058]: I1014 08:14:58.790939 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:14:58 crc kubenswrapper[5058]: E1014 08:14:58.791649 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.173902 5058 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk"] Oct 14 08:15:00 crc kubenswrapper[5058]: E1014 08:15:00.174503 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="extract-utilities" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.174538 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="extract-utilities" Oct 14 08:15:00 crc kubenswrapper[5058]: E1014 08:15:00.174579 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="registry-server" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.174597 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="registry-server" Oct 14 08:15:00 crc kubenswrapper[5058]: E1014 08:15:00.174644 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="extract-content" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.174666 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="extract-content" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.175065 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1775c250-7ad2-4655-8abc-076ec7d480bc" containerName="registry-server" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.176237 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.184600 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.186676 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.189438 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk"] Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.249000 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rcz\" (UniqueName: \"kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.249097 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.249178 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: 
\"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.350869 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9rcz\" (UniqueName: \"kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.350994 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.351115 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.353108 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.359020 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.369726 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9rcz\" (UniqueName: \"kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz\") pod \"collect-profiles-29340495-9p2dk\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.505210 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:00 crc kubenswrapper[5058]: I1014 08:15:00.780310 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk"] Oct 14 08:15:01 crc kubenswrapper[5058]: I1014 08:15:01.481775 5058 generic.go:334] "Generic (PLEG): container finished" podID="d4e56306-4fb5-4641-a8c1-3b9846da69e4" containerID="e303ecc5160d1a6833956807f74976e2ebb54d653d5b5ef79d083123cb1950ba" exitCode=0 Oct 14 08:15:01 crc kubenswrapper[5058]: I1014 08:15:01.481892 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" event={"ID":"d4e56306-4fb5-4641-a8c1-3b9846da69e4","Type":"ContainerDied","Data":"e303ecc5160d1a6833956807f74976e2ebb54d653d5b5ef79d083123cb1950ba"} Oct 14 08:15:01 crc kubenswrapper[5058]: I1014 08:15:01.481933 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" event={"ID":"d4e56306-4fb5-4641-a8c1-3b9846da69e4","Type":"ContainerStarted","Data":"f86f7bdff11d7bb6995ce523624f41d9b79fb27ca9b37e5d2df5104270320612"} Oct 14 08:15:02 crc kubenswrapper[5058]: I1014 08:15:02.851286 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:02 crc kubenswrapper[5058]: I1014 08:15:02.998883 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9rcz\" (UniqueName: \"kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz\") pod \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " Oct 14 08:15:02 crc kubenswrapper[5058]: I1014 08:15:02.999016 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume\") pod \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " Oct 14 08:15:02 crc kubenswrapper[5058]: I1014 08:15:02.999084 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume\") pod \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\" (UID: \"d4e56306-4fb5-4641-a8c1-3b9846da69e4\") " Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.000320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4e56306-4fb5-4641-a8c1-3b9846da69e4" (UID: "d4e56306-4fb5-4641-a8c1-3b9846da69e4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.005214 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4e56306-4fb5-4641-a8c1-3b9846da69e4" (UID: "d4e56306-4fb5-4641-a8c1-3b9846da69e4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.011126 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz" (OuterVolumeSpecName: "kube-api-access-q9rcz") pod "d4e56306-4fb5-4641-a8c1-3b9846da69e4" (UID: "d4e56306-4fb5-4641-a8c1-3b9846da69e4"). InnerVolumeSpecName "kube-api-access-q9rcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.100192 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e56306-4fb5-4641-a8c1-3b9846da69e4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.100253 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9rcz\" (UniqueName: \"kubernetes.io/projected/d4e56306-4fb5-4641-a8c1-3b9846da69e4-kube-api-access-q9rcz\") on node \"crc\" DevicePath \"\"" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.100267 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4e56306-4fb5-4641-a8c1-3b9846da69e4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.509174 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" event={"ID":"d4e56306-4fb5-4641-a8c1-3b9846da69e4","Type":"ContainerDied","Data":"f86f7bdff11d7bb6995ce523624f41d9b79fb27ca9b37e5d2df5104270320612"} Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.509238 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86f7bdff11d7bb6995ce523624f41d9b79fb27ca9b37e5d2df5104270320612" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.509268 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk" Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.948780 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm"] Oct 14 08:15:03 crc kubenswrapper[5058]: I1014 08:15:03.958743 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340450-2fngm"] Oct 14 08:15:04 crc kubenswrapper[5058]: I1014 08:15:04.807198 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e31dbf-0ae6-4cb1-a98a-8181f91dbad2" path="/var/lib/kubelet/pods/77e31dbf-0ae6-4cb1-a98a-8181f91dbad2/volumes" Oct 14 08:15:11 crc kubenswrapper[5058]: I1014 08:15:11.790115 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:15:11 crc kubenswrapper[5058]: E1014 08:15:11.790819 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:15:22 crc kubenswrapper[5058]: I1014 08:15:22.798383 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:15:22 crc kubenswrapper[5058]: E1014 08:15:22.799394 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:15:34 crc kubenswrapper[5058]: I1014 08:15:34.790665 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:15:34 crc kubenswrapper[5058]: E1014 08:15:34.791683 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:15:39 crc kubenswrapper[5058]: I1014 08:15:39.504339 5058 scope.go:117] "RemoveContainer" containerID="233be8a73a8db64a75752c45428dddd58d7bf366a152bcc671df57c0a0ad6640" Oct 14 08:15:48 crc kubenswrapper[5058]: I1014 08:15:48.790050 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:15:48 crc kubenswrapper[5058]: E1014 08:15:48.791067 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:16:03 crc kubenswrapper[5058]: I1014 08:16:03.790346 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:16:04 crc kubenswrapper[5058]: I1014 08:16:04.119042 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725"} Oct 14 08:17:07 crc kubenswrapper[5058]: I1014 08:17:07.940837 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:07 crc kubenswrapper[5058]: E1014 08:17:07.942745 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e56306-4fb5-4641-a8c1-3b9846da69e4" containerName="collect-profiles" Oct 14 08:17:07 crc kubenswrapper[5058]: I1014 08:17:07.942782 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e56306-4fb5-4641-a8c1-3b9846da69e4" containerName="collect-profiles" Oct 14 08:17:07 crc kubenswrapper[5058]: I1014 08:17:07.943164 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e56306-4fb5-4641-a8c1-3b9846da69e4" containerName="collect-profiles" Oct 14 08:17:07 crc kubenswrapper[5058]: I1014 08:17:07.951389 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:07 crc kubenswrapper[5058]: I1014 08:17:07.957118 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.066110 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xrx\" (UniqueName: \"kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.066169 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.066189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.167964 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.168110 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-26xrx\" (UniqueName: \"kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.168162 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.168779 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.168776 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.194064 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xrx\" (UniqueName: \"kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx\") pod \"certified-operators-84r8r\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.277359 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:08 crc kubenswrapper[5058]: I1014 08:17:08.807145 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:08 crc kubenswrapper[5058]: W1014 08:17:08.816621 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406d2bba_a428_49a4_a7df_b2cbfdd72244.slice/crio-782e4ab12a2600482b6b2a5a19bb798c3b228ddb5ee8a5b97bafc3bf86df7808 WatchSource:0}: Error finding container 782e4ab12a2600482b6b2a5a19bb798c3b228ddb5ee8a5b97bafc3bf86df7808: Status 404 returned error can't find the container with id 782e4ab12a2600482b6b2a5a19bb798c3b228ddb5ee8a5b97bafc3bf86df7808 Oct 14 08:17:09 crc kubenswrapper[5058]: I1014 08:17:09.698005 5058 generic.go:334] "Generic (PLEG): container finished" podID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerID="8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace" exitCode=0 Oct 14 08:17:09 crc kubenswrapper[5058]: I1014 08:17:09.698120 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerDied","Data":"8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace"} Oct 14 08:17:09 crc kubenswrapper[5058]: I1014 08:17:09.698503 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerStarted","Data":"782e4ab12a2600482b6b2a5a19bb798c3b228ddb5ee8a5b97bafc3bf86df7808"} Oct 14 08:17:09 crc kubenswrapper[5058]: I1014 08:17:09.701233 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:17:11 crc kubenswrapper[5058]: I1014 08:17:11.722589 5058 generic.go:334] "Generic (PLEG): container finished" podID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerID="b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7" exitCode=0 Oct 14 08:17:11 crc kubenswrapper[5058]: I1014 08:17:11.722956 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerDied","Data":"b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7"} Oct 14 08:17:12 crc kubenswrapper[5058]: I1014 08:17:12.735085 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerStarted","Data":"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed"} Oct 14 08:17:12 crc kubenswrapper[5058]: I1014 08:17:12.762392 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84r8r" podStartSLOduration=3.094409638 podStartE2EDuration="5.76235989s" podCreationTimestamp="2025-10-14 08:17:07 +0000 UTC" firstStartedPulling="2025-10-14 08:17:09.700627545 +0000 UTC m=+5377.611711381" lastFinishedPulling="2025-10-14 08:17:12.368577787 +0000 UTC m=+5380.279661633" observedRunningTime="2025-10-14 08:17:12.758310396 +0000 UTC m=+5380.669394262" watchObservedRunningTime="2025-10-14 08:17:12.76235989 +0000 UTC m=+5380.673443736" Oct 14 08:17:18 crc kubenswrapper[5058]: I1014 08:17:18.277743 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:18 crc kubenswrapper[5058]: I1014 08:17:18.278984 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:18 crc kubenswrapper[5058]: I1014 08:17:18.340386 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:18 crc kubenswrapper[5058]: I1014 08:17:18.915751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:18 crc kubenswrapper[5058]: I1014 08:17:18.987064 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:20 crc kubenswrapper[5058]: I1014 08:17:20.823046 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84r8r" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="registry-server" containerID="cri-o://5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed" gracePeriod=2 Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.286508 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.426007 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content\") pod \"406d2bba-a428-49a4-a7df-b2cbfdd72244\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.426446 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities\") pod \"406d2bba-a428-49a4-a7df-b2cbfdd72244\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.426546 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrx\" (UniqueName: \"kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx\") pod \"406d2bba-a428-49a4-a7df-b2cbfdd72244\" (UID: \"406d2bba-a428-49a4-a7df-b2cbfdd72244\") " Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.427387 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities" (OuterVolumeSpecName: "utilities") pod "406d2bba-a428-49a4-a7df-b2cbfdd72244" (UID: "406d2bba-a428-49a4-a7df-b2cbfdd72244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.433076 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx" (OuterVolumeSpecName: "kube-api-access-26xrx") pod "406d2bba-a428-49a4-a7df-b2cbfdd72244" (UID: "406d2bba-a428-49a4-a7df-b2cbfdd72244"). InnerVolumeSpecName "kube-api-access-26xrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.526449 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406d2bba-a428-49a4-a7df-b2cbfdd72244" (UID: "406d2bba-a428-49a4-a7df-b2cbfdd72244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.527995 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26xrx\" (UniqueName: \"kubernetes.io/projected/406d2bba-a428-49a4-a7df-b2cbfdd72244-kube-api-access-26xrx\") on node \"crc\" DevicePath \"\"" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.528034 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.528048 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d2bba-a428-49a4-a7df-b2cbfdd72244-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.832564 5058 generic.go:334] "Generic (PLEG): container finished" podID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerID="5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed" exitCode=0 Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.832637 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerDied","Data":"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed"} Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.832650 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84r8r" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.832680 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84r8r" event={"ID":"406d2bba-a428-49a4-a7df-b2cbfdd72244","Type":"ContainerDied","Data":"782e4ab12a2600482b6b2a5a19bb798c3b228ddb5ee8a5b97bafc3bf86df7808"} Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.832732 5058 scope.go:117] "RemoveContainer" containerID="5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.853783 5058 scope.go:117] "RemoveContainer" containerID="b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.863202 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.868281 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84r8r"] Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.878883 5058 scope.go:117] "RemoveContainer" containerID="8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.921191 5058 scope.go:117] "RemoveContainer" containerID="5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed" Oct 14 08:17:21 crc kubenswrapper[5058]: E1014 08:17:21.921739 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed\": container with ID starting with 5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed not found: ID does not exist" containerID="5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.921791 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed"} err="failed to get container status \"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed\": rpc error: code = NotFound desc = could not find container \"5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed\": container with ID starting with 5d295de2abb315793f70d3bd24e43dfd4e9c5eca8a8aa24c6a2a19b63f76f9ed not found: ID does not exist" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.921910 5058 scope.go:117] "RemoveContainer" containerID="b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7" Oct 14 08:17:21 crc kubenswrapper[5058]: E1014 08:17:21.922420 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7\": container with ID starting with b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7 not found: ID does not exist" containerID="b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.922454 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7"} err="failed to get container status \"b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7\": rpc error: code = NotFound desc = could not find 
container \"b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7\": container with ID starting with b1a913ad0964538566ef1d8dbdedab1014e5832237d1cd7717a00bfdf78d52b7 not found: ID does not exist" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.922474 5058 scope.go:117] "RemoveContainer" containerID="8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace" Oct 14 08:17:21 crc kubenswrapper[5058]: E1014 08:17:21.922910 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace\": container with ID starting with 8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace not found: ID does not exist" containerID="8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace" Oct 14 08:17:21 crc kubenswrapper[5058]: I1014 08:17:21.922945 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace"} err="failed to get container status \"8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace\": rpc error: code = NotFound desc = could not find container \"8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace\": container with ID starting with 8aa97afe1e937c4051a7cbcddf059b6ad7e295664f8bea2a60f3d11e5fad3ace not found: ID does not exist" Oct 14 08:17:22 crc kubenswrapper[5058]: I1014 08:17:22.807154 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" path="/var/lib/kubelet/pods/406d2bba-a428-49a4-a7df-b2cbfdd72244/volumes" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.076100 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:17:47 crc kubenswrapper[5058]: E1014 08:17:47.077517 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="extract-content" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.077550 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="extract-content" Oct 14 08:17:47 crc kubenswrapper[5058]: E1014 08:17:47.077597 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="extract-utilities" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.077615 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="extract-utilities" Oct 14 08:17:47 crc kubenswrapper[5058]: E1014 08:17:47.077678 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="registry-server" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.077696 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="registry-server" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.078080 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="406d2bba-a428-49a4-a7df-b2cbfdd72244" containerName="registry-server" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.080722 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.085544 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.105448 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.105523 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnzx\" (UniqueName: \"kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.105582 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.206422 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.206464 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxnzx\" (UniqueName: \"kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.206495 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.206901 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.207248 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.232980 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pxnzx\" (UniqueName: \"kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx\") pod \"redhat-operators-ldcn4\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.401551 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:47 crc kubenswrapper[5058]: I1014 08:17:47.946121 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:17:48 crc kubenswrapper[5058]: I1014 08:17:48.090511 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerStarted","Data":"3767090d4dd6a0f41217477ff333cf6baf28060aa3891092b873c8038c59962c"} Oct 14 08:17:49 crc kubenswrapper[5058]: I1014 08:17:49.101482 5058 generic.go:334] "Generic (PLEG): container finished" podID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerID="aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233" exitCode=0 Oct 14 08:17:49 crc kubenswrapper[5058]: I1014 08:17:49.101546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerDied","Data":"aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233"} Oct 14 08:17:50 crc kubenswrapper[5058]: I1014 08:17:50.120591 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerStarted","Data":"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e"} Oct 14 08:17:51 crc kubenswrapper[5058]: I1014 08:17:51.129487 5058 generic.go:334] "Generic (PLEG): container finished" podID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerID="8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e" exitCode=0 Oct 14 08:17:51 crc kubenswrapper[5058]: I1014 08:17:51.129523 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerDied","Data":"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e"} Oct 14 08:17:52 crc kubenswrapper[5058]: I1014 08:17:52.141005 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerStarted","Data":"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73"} Oct 14 08:17:52 crc kubenswrapper[5058]: I1014 08:17:52.162764 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ldcn4" podStartSLOduration=2.671138961 podStartE2EDuration="5.162740036s" podCreationTimestamp="2025-10-14 08:17:47 +0000 UTC" firstStartedPulling="2025-10-14 08:17:49.10405347 +0000 UTC m=+5417.015137286" lastFinishedPulling="2025-10-14 08:17:51.595654555 +0000 UTC m=+5419.506738361" observedRunningTime="2025-10-14 08:17:52.15898531 +0000 UTC m=+5420.070069156" watchObservedRunningTime="2025-10-14 08:17:52.162740036 +0000 UTC m=+5420.073823842" Oct 14 08:17:57 crc kubenswrapper[5058]: I1014 08:17:57.402245 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 
08:17:57 crc kubenswrapper[5058]: I1014 08:17:57.402871 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:57 crc kubenswrapper[5058]: I1014 08:17:57.481685 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:58 crc kubenswrapper[5058]: I1014 08:17:58.267830 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:17:58 crc kubenswrapper[5058]: I1014 08:17:58.344656 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.210720 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ldcn4" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="registry-server" containerID="cri-o://6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73" gracePeriod=2 Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.728380 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.760245 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxnzx\" (UniqueName: \"kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx\") pod \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.760691 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content\") pod \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.760779 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities\") pod \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\" (UID: \"b1e3bc24-c475-4fa0-8d93-b621082b85ed\") " Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.765297 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities" (OuterVolumeSpecName: "utilities") pod "b1e3bc24-c475-4fa0-8d93-b621082b85ed" (UID: "b1e3bc24-c475-4fa0-8d93-b621082b85ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.769994 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx" (OuterVolumeSpecName: "kube-api-access-pxnzx") pod "b1e3bc24-c475-4fa0-8d93-b621082b85ed" (UID: "b1e3bc24-c475-4fa0-8d93-b621082b85ed"). InnerVolumeSpecName "kube-api-access-pxnzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.863565 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxnzx\" (UniqueName: \"kubernetes.io/projected/b1e3bc24-c475-4fa0-8d93-b621082b85ed-kube-api-access-pxnzx\") on node \"crc\" DevicePath \"\"" Oct 14 08:18:00 crc kubenswrapper[5058]: I1014 08:18:00.863607 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.221125 5058 generic.go:334] "Generic (PLEG): container finished" podID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerID="6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73" exitCode=0 Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.221242 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldcn4" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.222138 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerDied","Data":"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73"} Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.222331 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldcn4" event={"ID":"b1e3bc24-c475-4fa0-8d93-b621082b85ed","Type":"ContainerDied","Data":"3767090d4dd6a0f41217477ff333cf6baf28060aa3891092b873c8038c59962c"} Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.222387 5058 scope.go:117] "RemoveContainer" containerID="6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.249994 5058 scope.go:117] "RemoveContainer" containerID="8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.279069 5058 scope.go:117] "RemoveContainer" containerID="aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.325742 5058 scope.go:117] "RemoveContainer" containerID="6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73" Oct 14 08:18:01 crc kubenswrapper[5058]: E1014 08:18:01.326404 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73\": container with ID starting with 6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73 not found: ID does not exist" containerID="6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.326460 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73"} err="failed to get container status \"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73\": rpc error: code = NotFound desc = could not find container \"6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73\": container with ID starting with 6da2364013726fb38ca361b3f1d9fccce5b22a6e90ee6fd98e98f46592c0de73 not found: ID does not exist" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.326498 5058 scope.go:117] 
"RemoveContainer" containerID="8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e" Oct 14 08:18:01 crc kubenswrapper[5058]: E1014 08:18:01.327277 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e\": container with ID starting with 8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e not found: ID does not exist" containerID="8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.327475 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e"} err="failed to get container status \"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e\": rpc error: code = NotFound desc = could not find container \"8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e\": container with ID starting with 8aa5bb6c9eb056744d98104f3647e32e58133add092977a851f4dd774908737e not found: ID does not exist" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.327640 5058 scope.go:117] "RemoveContainer" containerID="aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233" Oct 14 08:18:01 crc kubenswrapper[5058]: E1014 08:18:01.328308 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233\": container with ID starting with aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233 not found: ID does not exist" containerID="aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233" Oct 14 08:18:01 crc kubenswrapper[5058]: I1014 08:18:01.328368 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233"} err="failed to get container status \"aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233\": rpc error: code = NotFound desc = could not find container \"aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233\": container with ID starting with aba8717c90476dd7387289360b05b16f5b2cdf3331673b526119865051b6f233 not found: ID does not exist" Oct 14 08:18:02 crc kubenswrapper[5058]: I1014 08:18:02.014842 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e3bc24-c475-4fa0-8d93-b621082b85ed" (UID: "b1e3bc24-c475-4fa0-8d93-b621082b85ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:18:02 crc kubenswrapper[5058]: I1014 08:18:02.082237 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e3bc24-c475-4fa0-8d93-b621082b85ed-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:18:02 crc kubenswrapper[5058]: I1014 08:18:02.183967 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:18:02 crc kubenswrapper[5058]: I1014 08:18:02.195765 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ldcn4"] Oct 14 08:18:02 crc kubenswrapper[5058]: I1014 08:18:02.803176 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" path="/var/lib/kubelet/pods/b1e3bc24-c475-4fa0-8d93-b621082b85ed/volumes" Oct 14 08:18:33 crc kubenswrapper[5058]: I1014 08:18:33.656603 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:18:33 crc kubenswrapper[5058]: I1014 08:18:33.658105 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:19:03 crc kubenswrapper[5058]: I1014 08:19:03.655921 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:19:03 crc kubenswrapper[5058]: I1014 08:19:03.656748 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:19:33 crc kubenswrapper[5058]: I1014 08:19:33.656444 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:19:33 crc kubenswrapper[5058]: I1014 08:19:33.657243 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:19:33 crc kubenswrapper[5058]: I1014 08:19:33.657307 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:19:33 crc kubenswrapper[5058]: I1014 08:19:33.658180 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:19:33 crc kubenswrapper[5058]: I1014 08:19:33.658276 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725" gracePeriod=600 Oct 14 08:19:33 crc kubenswrapper[5058]: E1014 08:19:33.849554 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64184db4_5b6d_4aa8_b780_c9f6163af3d8.slice/crio-0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64184db4_5b6d_4aa8_b780_c9f6163af3d8.slice/crio-conmon-0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725.scope\": RecentStats: unable to find data in memory cache]" Oct 14 08:19:34 crc kubenswrapper[5058]: I1014 08:19:34.116040 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725" exitCode=0 Oct 14 08:19:34 crc kubenswrapper[5058]: I1014 08:19:34.116109 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725"} Oct 14 08:19:34 crc kubenswrapper[5058]: I1014 08:19:34.116616 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"} Oct 14 08:19:34 crc kubenswrapper[5058]: I1014 08:19:34.116664 5058 scope.go:117] "RemoveContainer" containerID="8321cce3b23a450af048f78a86331838a1f4d702e87ca02011418f9aa3ffe1fc" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.976748 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:19:50 crc kubenswrapper[5058]: E1014 08:19:50.979340 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="extract-utilities" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.979504 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="extract-utilities" Oct 14 08:19:50 crc kubenswrapper[5058]: E1014 08:19:50.979650 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="extract-content" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.979786 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="extract-content" Oct 14 08:19:50 crc kubenswrapper[5058]: E1014 08:19:50.979983 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="registry-server" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.980142 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="registry-server" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.980551 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e3bc24-c475-4fa0-8d93-b621082b85ed" containerName="registry-server" Oct 14 08:19:50 crc kubenswrapper[5058]: I1014 08:19:50.985105 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.004278 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.180126 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.180227 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.180267 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52l42\" (UniqueName: \"kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.281758 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.282120 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.282144 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52l42\" (UniqueName: \"kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.282441 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content\") pod 
\"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.282578 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.302124 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52l42\" (UniqueName: \"kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42\") pod \"community-operators-hbw26\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.316025 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:19:51 crc kubenswrapper[5058]: I1014 08:19:51.817186 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:19:51 crc kubenswrapper[5058]: W1014 08:19:51.826176 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcc0937c_827d_44c0_abb3_73611a511343.slice/crio-86b7ed6e8697266d04294480729e68e97ddff4e5c61619c4763946fb18d91596 WatchSource:0}: Error finding container 86b7ed6e8697266d04294480729e68e97ddff4e5c61619c4763946fb18d91596: Status 404 returned error can't find the container with id 86b7ed6e8697266d04294480729e68e97ddff4e5c61619c4763946fb18d91596 Oct 14 08:19:52 crc kubenswrapper[5058]: I1014 08:19:52.324643 5058 generic.go:334] "Generic (PLEG): container finished" podID="dcc0937c-827d-44c0-abb3-73611a511343" containerID="4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6" exitCode=0 Oct 14 08:19:52 crc kubenswrapper[5058]: I1014 08:19:52.324735 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerDied","Data":"4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6"} Oct 14 08:19:52 crc kubenswrapper[5058]: I1014 08:19:52.324787 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerStarted","Data":"86b7ed6e8697266d04294480729e68e97ddff4e5c61619c4763946fb18d91596"} Oct 14 08:19:53 crc kubenswrapper[5058]: I1014 08:19:53.346642 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerStarted","Data":"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2"} Oct 14 08:19:54 crc kubenswrapper[5058]: I1014 08:19:54.359927 5058 generic.go:334] "Generic (PLEG): container finished" podID="dcc0937c-827d-44c0-abb3-73611a511343" containerID="01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2" exitCode=0 Oct 14 08:19:54 crc kubenswrapper[5058]: I1014 08:19:54.360147 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" 
event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerDied","Data":"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2"} Oct 14 08:19:55 crc kubenswrapper[5058]: I1014 08:19:55.375381 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerStarted","Data":"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b"} Oct 14 08:19:55 crc kubenswrapper[5058]: I1014 08:19:55.402198 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbw26" podStartSLOduration=2.8428698470000002 podStartE2EDuration="5.402179242s" podCreationTimestamp="2025-10-14 08:19:50 +0000 UTC" firstStartedPulling="2025-10-14 08:19:52.326939065 +0000 UTC m=+5540.238022871" lastFinishedPulling="2025-10-14 08:19:54.88624846 +0000 UTC m=+5542.797332266" observedRunningTime="2025-10-14 08:19:55.401449341 +0000 UTC m=+5543.312533217" watchObservedRunningTime="2025-10-14 08:19:55.402179242 +0000 UTC m=+5543.313263058" Oct 14 08:20:01 crc kubenswrapper[5058]: I1014 08:20:01.316260 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:01 crc kubenswrapper[5058]: I1014 08:20:01.316969 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:01 crc kubenswrapper[5058]: I1014 08:20:01.380890 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:01 crc kubenswrapper[5058]: I1014 08:20:01.486189 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:01 crc kubenswrapper[5058]: I1014 08:20:01.619630 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:20:03 crc kubenswrapper[5058]: I1014 08:20:03.451387 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbw26" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="registry-server" containerID="cri-o://dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b" gracePeriod=2 Oct 14 08:20:03 crc kubenswrapper[5058]: I1014 08:20:03.941668 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.082503 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52l42\" (UniqueName: \"kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42\") pod \"dcc0937c-827d-44c0-abb3-73611a511343\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.082708 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities\") pod \"dcc0937c-827d-44c0-abb3-73611a511343\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.082738 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content\") pod \"dcc0937c-827d-44c0-abb3-73611a511343\" (UID: \"dcc0937c-827d-44c0-abb3-73611a511343\") " Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.084224 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities" (OuterVolumeSpecName: "utilities") pod "dcc0937c-827d-44c0-abb3-73611a511343" (UID: "dcc0937c-827d-44c0-abb3-73611a511343"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.090375 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42" (OuterVolumeSpecName: "kube-api-access-52l42") pod "dcc0937c-827d-44c0-abb3-73611a511343" (UID: "dcc0937c-827d-44c0-abb3-73611a511343"). InnerVolumeSpecName "kube-api-access-52l42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.145860 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcc0937c-827d-44c0-abb3-73611a511343" (UID: "dcc0937c-827d-44c0-abb3-73611a511343"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.184511 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.184541 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcc0937c-827d-44c0-abb3-73611a511343-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.184552 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52l42\" (UniqueName: \"kubernetes.io/projected/dcc0937c-827d-44c0-abb3-73611a511343-kube-api-access-52l42\") on node \"crc\" DevicePath \"\"" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.462202 5058 generic.go:334] "Generic (PLEG): container finished" podID="dcc0937c-827d-44c0-abb3-73611a511343" containerID="dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b" exitCode=0 Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.462268 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerDied","Data":"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b"} Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.462313 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbw26" event={"ID":"dcc0937c-827d-44c0-abb3-73611a511343","Type":"ContainerDied","Data":"86b7ed6e8697266d04294480729e68e97ddff4e5c61619c4763946fb18d91596"} Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.462346 5058 scope.go:117] "RemoveContainer" containerID="dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.462547 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbw26" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.491707 5058 scope.go:117] "RemoveContainer" containerID="01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.545305 5058 scope.go:117] "RemoveContainer" containerID="4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.546973 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.554782 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbw26"] Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.568022 5058 scope.go:117] "RemoveContainer" containerID="dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b" Oct 14 08:20:04 crc kubenswrapper[5058]: E1014 08:20:04.568526 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b\": container with ID starting with dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b not found: ID does not exist" containerID="dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.568576 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b"} err="failed to get container status \"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b\": rpc error: code = NotFound desc = could not find container \"dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b\": container with ID starting with dc5d6d0a94b97f9fd6f2c47676ff43d3a46317b71d9f12b5ebb30427e5c24a6b not found: ID does not exist" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.568602 5058 scope.go:117] "RemoveContainer" containerID="01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2" Oct 14 08:20:04 crc kubenswrapper[5058]: E1014 08:20:04.569090 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2\": container with ID starting with 01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2 not found: ID does not exist" containerID="01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.569132 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2"} err="failed to get container status \"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2\": rpc error: code = NotFound desc = could not find container \"01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2\": container with ID starting with 01dcbb38fcb2966b10b92953825bbc9188901db678704e9e6f84cbdc3ea7dca2 not found: ID does not exist" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.569165 5058 scope.go:117] "RemoveContainer" containerID="4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6" Oct 14 08:20:04 crc kubenswrapper[5058]: E1014 08:20:04.569535 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6\": container with ID starting with 4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6 not found: ID does not exist" containerID="4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.569580 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6"} err="failed to get container status \"4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6\": rpc error: code = NotFound desc = could not find container \"4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6\": container with ID starting with 4bacde9f001d9703e547ad27dec6587607b2c72429a2fa7c442e863a6e1949e6 not found: ID does not exist" Oct 14 08:20:04 crc kubenswrapper[5058]: E1014 08:20:04.684429 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcc0937c_827d_44c0_abb3_73611a511343.slice\": RecentStats: unable to find data in memory cache]" Oct 14 08:20:04 crc kubenswrapper[5058]: I1014 08:20:04.804677 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc0937c-827d-44c0-abb3-73611a511343" path="/var/lib/kubelet/pods/dcc0937c-827d-44c0-abb3-73611a511343/volumes" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.901767 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:16 crc kubenswrapper[5058]: E1014 08:21:16.902966 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="registry-server" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.902983 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="registry-server" Oct 14 08:21:16 crc kubenswrapper[5058]: E1014 08:21:16.902996 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="extract-utilities" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.903004 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="extract-utilities" Oct 14 08:21:16 crc kubenswrapper[5058]: E1014 08:21:16.903025 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="extract-content" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.903032 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="extract-content" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.903394 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc0937c-827d-44c0-abb3-73611a511343" containerName="registry-server" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.904614 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:16 crc kubenswrapper[5058]: I1014 08:21:16.916048 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.085334 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.085397 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.085524 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25xp\" (UniqueName: \"kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.186415 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.186481 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.186523 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25xp\" (UniqueName: \"kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.187215 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.187228 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.215050 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g25xp\" (UniqueName: \"kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp\") pod \"redhat-marketplace-nt6cw\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.239766 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:17 crc kubenswrapper[5058]: I1014 08:21:17.722577 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:18 crc kubenswrapper[5058]: I1014 08:21:18.171510 5058 generic.go:334] "Generic (PLEG): container finished" podID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerID="45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36" exitCode=0 Oct 14 08:21:18 crc kubenswrapper[5058]: I1014 08:21:18.171576 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerDied","Data":"45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36"} Oct 14 08:21:18 crc kubenswrapper[5058]: I1014 08:21:18.171615 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerStarted","Data":"0810e0851759a3800ef80444c4df3c3c786a10177acfb9bda0951913a5476b42"} Oct 14 08:21:20 crc kubenswrapper[5058]: I1014 08:21:20.196641 5058 generic.go:334] "Generic (PLEG): container finished" podID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerID="3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463" exitCode=0 Oct 14 08:21:20 crc kubenswrapper[5058]: I1014 08:21:20.197281 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerDied","Data":"3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463"} Oct 14 08:21:21 crc kubenswrapper[5058]: I1014 08:21:21.210183 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerStarted","Data":"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2"} Oct 14 08:21:21 crc kubenswrapper[5058]: I1014 08:21:21.239375 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nt6cw" podStartSLOduration=2.7717738990000003 podStartE2EDuration="5.239351757s" podCreationTimestamp="2025-10-14 08:21:16 +0000 UTC" firstStartedPulling="2025-10-14 08:21:18.173734732 +0000 UTC m=+5626.084818568" lastFinishedPulling="2025-10-14 08:21:20.64131259 +0000 UTC m=+5628.552396426" observedRunningTime="2025-10-14 08:21:21.231747132 +0000 UTC m=+5629.142830968" watchObservedRunningTime="2025-10-14 08:21:21.239351757 +0000 UTC m=+5629.150435583" Oct 14 08:21:27 crc kubenswrapper[5058]: I1014 08:21:27.241074 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:27 crc kubenswrapper[5058]: I1014 08:21:27.241655 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:27 crc kubenswrapper[5058]: I1014 08:21:27.294134 5058 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:27 crc kubenswrapper[5058]: I1014 08:21:27.371264 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:27 crc kubenswrapper[5058]: I1014 08:21:27.534948 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:29 crc kubenswrapper[5058]: I1014 08:21:29.285482 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nt6cw" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="registry-server" containerID="cri-o://e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2" gracePeriod=2 Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.719783 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.831848 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content\") pod \"32d410fd-9ff8-451c-ba53-dd9278766aa7\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.831985 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities\") pod \"32d410fd-9ff8-451c-ba53-dd9278766aa7\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.832085 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g25xp\" (UniqueName: \"kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp\") pod \"32d410fd-9ff8-451c-ba53-dd9278766aa7\" (UID: \"32d410fd-9ff8-451c-ba53-dd9278766aa7\") " Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.832720 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities" (OuterVolumeSpecName: "utilities") pod "32d410fd-9ff8-451c-ba53-dd9278766aa7" (UID: "32d410fd-9ff8-451c-ba53-dd9278766aa7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.840255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp" (OuterVolumeSpecName: "kube-api-access-g25xp") pod "32d410fd-9ff8-451c-ba53-dd9278766aa7" (UID: "32d410fd-9ff8-451c-ba53-dd9278766aa7"). InnerVolumeSpecName "kube-api-access-g25xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.845122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32d410fd-9ff8-451c-ba53-dd9278766aa7" (UID: "32d410fd-9ff8-451c-ba53-dd9278766aa7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.933892 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.933923 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d410fd-9ff8-451c-ba53-dd9278766aa7-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:29.933933 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g25xp\" (UniqueName: \"kubernetes.io/projected/32d410fd-9ff8-451c-ba53-dd9278766aa7-kube-api-access-g25xp\") on node \"crc\" DevicePath \"\"" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.296535 5058 generic.go:334] "Generic (PLEG): container finished" podID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerID="e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2" exitCode=0 Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.296595 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerDied","Data":"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2"} Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.296673 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt6cw" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.296718 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt6cw" event={"ID":"32d410fd-9ff8-451c-ba53-dd9278766aa7","Type":"ContainerDied","Data":"0810e0851759a3800ef80444c4df3c3c786a10177acfb9bda0951913a5476b42"} Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.296748 5058 scope.go:117] "RemoveContainer" containerID="e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.322731 5058 scope.go:117] "RemoveContainer" containerID="3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.333995 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.350198 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt6cw"] Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.373505 5058 scope.go:117] "RemoveContainer" containerID="45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.399151 5058 scope.go:117] "RemoveContainer" containerID="e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2" Oct 14 08:21:30 crc kubenswrapper[5058]: E1014 08:21:30.399967 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2\": container with ID starting with e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2 not found: ID does not exist" containerID="e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.400025 5058 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2"} err="failed to get container status \"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2\": rpc error: code = NotFound desc = could not find container \"e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2\": container with ID starting with e2073d2c0e985139d41b049effb013e692f84aec2b61834fdff27e3ea3eb83b2 not found: ID does not exist" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.400099 5058 scope.go:117] "RemoveContainer" containerID="3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463" Oct 14 08:21:30 crc kubenswrapper[5058]: E1014 08:21:30.400500 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463\": container with ID starting with 3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463 not found: ID does not exist" containerID="3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.400539 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463"} err="failed to get container status \"3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463\": rpc error: code = NotFound desc = could not find container \"3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463\": container with ID starting with 3cf7f549d0a252e633f6c04141c09ef7ad7fa8a0c9a699dfcecdf5ec0c9a6463 not found: ID does not exist" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.400566 5058 scope.go:117] "RemoveContainer" containerID="45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36" Oct 14 08:21:30 crc kubenswrapper[5058]: E1014 08:21:30.400842 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36\": container with ID starting with 45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36 not found: ID does not exist" containerID="45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.400875 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36"} err="failed to get container status \"45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36\": rpc error: code = NotFound desc = could not find container \"45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36\": container with ID starting with 45110d963665f7db81a28716d4fa860d87bc5f4e861b6fd36242af5d12990d36 not found: ID does not exist" Oct 14 08:21:30 crc kubenswrapper[5058]: I1014 08:21:30.805967 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" path="/var/lib/kubelet/pods/32d410fd-9ff8-451c-ba53-dd9278766aa7/volumes" Oct 14 08:21:33 crc kubenswrapper[5058]: I1014 08:21:33.656291 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:21:33 crc kubenswrapper[5058]: I1014 08:21:33.657890 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:22:03 crc kubenswrapper[5058]: I1014 08:22:03.656569 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:22:03 crc kubenswrapper[5058]: I1014 08:22:03.657442 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.655963 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.656620 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.656674 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.657332 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.657399 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447" gracePeriod=600 Oct 14 08:22:33 crc kubenswrapper[5058]: E1014 08:22:33.785309 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.867179 5058 
Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.867275 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"}
Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.867461 5058 scope.go:117] "RemoveContainer" containerID="0fd1ee6d8ac33603c715bd710416602c3b5ef435ffeb7bd093aafbd17f254725"
Oct 14 08:22:33 crc kubenswrapper[5058]: I1014 08:22:33.868490 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:22:33 crc kubenswrapper[5058]: E1014 08:22:33.869021 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:22:46 crc kubenswrapper[5058]: I1014 08:22:46.790530 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:22:46 crc kubenswrapper[5058]: E1014 08:22:46.791485 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:22:57 crc kubenswrapper[5058]: I1014 08:22:57.790326 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:22:57 crc kubenswrapper[5058]: E1014 08:22:57.791165 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:23:10 crc kubenswrapper[5058]: I1014 08:23:10.790052 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:23:10 crc kubenswrapper[5058]: E1014 08:23:10.790727 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:23:24 crc kubenswrapper[5058]: I1014 08:23:24.790788 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:23:24 crc kubenswrapper[5058]: E1014 08:23:24.791874 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:23:39 crc kubenswrapper[5058]: I1014 08:23:39.790000 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:23:39 crc kubenswrapper[5058]: E1014 08:23:39.791046 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:23:52 crc kubenswrapper[5058]: I1014 08:23:52.799404 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:23:52 crc kubenswrapper[5058]: E1014 08:23:52.800755 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:24:05 crc kubenswrapper[5058]: I1014 08:24:05.790141 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:24:05 crc kubenswrapper[5058]: E1014 08:24:05.791146 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:24:16 crc kubenswrapper[5058]: I1014 08:24:16.790412 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:24:16 crc kubenswrapper[5058]: E1014 08:24:16.792954 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:24:30 crc kubenswrapper[5058]: I1014 08:24:30.790178 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:24:30 crc kubenswrapper[5058]: E1014 08:24:30.790983 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:24:45 crc kubenswrapper[5058]: I1014 08:24:45.791132 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:24:45 crc kubenswrapper[5058]: E1014 08:24:45.792133 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:24:56 crc kubenswrapper[5058]: I1014 08:24:56.791169 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:24:56 crc kubenswrapper[5058]: E1014 08:24:56.792135 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:25:07 crc kubenswrapper[5058]: I1014 08:25:07.790424 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:25:07 crc kubenswrapper[5058]: E1014 08:25:07.791513 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:25:22 crc kubenswrapper[5058]: I1014 08:25:22.797327 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:25:22 crc kubenswrapper[5058]: E1014 08:25:22.798462 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:25:34 crc kubenswrapper[5058]: I1014 08:25:34.790417 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:25:34 crc kubenswrapper[5058]: E1014 08:25:34.791434 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:25:45 crc kubenswrapper[5058]: I1014 08:25:45.790003 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:25:45 crc kubenswrapper[5058]: E1014 08:25:45.791084 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:25:59 crc kubenswrapper[5058]: I1014 08:25:59.789686 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:25:59 crc kubenswrapper[5058]: E1014 08:25:59.790869 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:26:11 crc kubenswrapper[5058]: I1014 08:26:11.790989 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:26:11 crc kubenswrapper[5058]: E1014 08:26:11.792112 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:26:26 crc kubenswrapper[5058]: I1014 08:26:26.790572 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:26:26 crc kubenswrapper[5058]: E1014 08:26:26.792292 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:26:40 crc kubenswrapper[5058]: I1014 08:26:40.790334 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:26:40 crc kubenswrapper[5058]: E1014 08:26:40.791291 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:26:54 crc kubenswrapper[5058]: I1014 08:26:54.790580 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:26:54 crc kubenswrapper[5058]: E1014 08:26:54.791418 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:27:05 crc kubenswrapper[5058]: I1014 08:27:05.790526 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:27:05 crc kubenswrapper[5058]: E1014 08:27:05.791834 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:27:19 crc kubenswrapper[5058]: I1014 08:27:19.790453 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:27:19 crc kubenswrapper[5058]: E1014 08:27:19.791742 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:27:30 crc kubenswrapper[5058]: I1014 08:27:30.790468 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:27:30 crc kubenswrapper[5058]: E1014 08:27:30.791488 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:27:41 crc kubenswrapper[5058]: I1014 08:27:41.789527 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
Oct 14 08:27:42 crc kubenswrapper[5058]: I1014 08:27:42.821090 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935"}
Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.260078 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"]
Oct 14 08:28:27 crc kubenswrapper[5058]: E1014 08:28:27.261145 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="registry-server"
podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="registry-server" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.261169 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="registry-server" Oct 14 08:28:27 crc kubenswrapper[5058]: E1014 08:28:27.261194 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="extract-utilities" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.261209 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="extract-utilities" Oct 14 08:28:27 crc kubenswrapper[5058]: E1014 08:28:27.261220 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="extract-content" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.261234 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="extract-content" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.261495 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d410fd-9ff8-451c-ba53-dd9278766aa7" containerName="registry-server" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.263226 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.281828 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"] Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.333723 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2rs\" (UniqueName: \"kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.333805 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.333864 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.435696 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2rs\" (UniqueName: \"kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.435774 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities\") pod \"redhat-operators-6ql45\" (UID: 
\"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.435851 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.436498 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.436541 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.459239 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2rs\" (UniqueName: \"kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs\") pod \"redhat-operators-6ql45\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:27 crc kubenswrapper[5058]: I1014 08:28:27.590265 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:28 crc kubenswrapper[5058]: I1014 08:28:28.050261 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"] Oct 14 08:28:28 crc kubenswrapper[5058]: I1014 08:28:28.283499 5058 generic.go:334] "Generic (PLEG): container finished" podID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerID="b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa" exitCode=0 Oct 14 08:28:28 crc kubenswrapper[5058]: I1014 08:28:28.283683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerDied","Data":"b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa"} Oct 14 08:28:28 crc kubenswrapper[5058]: I1014 08:28:28.283749 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerStarted","Data":"4a427c3194f47c52e27b385e420a810464b6eb6e00b91525cc4355ef97481b22"} Oct 14 08:28:28 crc kubenswrapper[5058]: I1014 08:28:28.284906 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:28:30 crc kubenswrapper[5058]: I1014 08:28:30.314108 5058 generic.go:334] "Generic (PLEG): container finished" podID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerID="2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7" exitCode=0 Oct 14 08:28:30 crc kubenswrapper[5058]: I1014 08:28:30.314613 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" 
event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerDied","Data":"2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7"} Oct 14 08:28:31 crc kubenswrapper[5058]: I1014 08:28:31.327956 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerStarted","Data":"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef"} Oct 14 08:28:31 crc kubenswrapper[5058]: I1014 08:28:31.362481 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ql45" podStartSLOduration=1.875840996 podStartE2EDuration="4.362454324s" podCreationTimestamp="2025-10-14 08:28:27 +0000 UTC" firstStartedPulling="2025-10-14 08:28:28.284699486 +0000 UTC m=+6056.195783292" lastFinishedPulling="2025-10-14 08:28:30.771312814 +0000 UTC m=+6058.682396620" observedRunningTime="2025-10-14 08:28:31.353754157 +0000 UTC m=+6059.264838043" watchObservedRunningTime="2025-10-14 08:28:31.362454324 +0000 UTC m=+6059.273538160" Oct 14 08:28:37 crc kubenswrapper[5058]: I1014 08:28:37.590938 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:37 crc kubenswrapper[5058]: I1014 08:28:37.591970 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:37 crc kubenswrapper[5058]: I1014 08:28:37.671222 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:38 crc kubenswrapper[5058]: I1014 08:28:38.465192 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:38 crc kubenswrapper[5058]: I1014 08:28:38.531136 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"] Oct 14 08:28:40 crc kubenswrapper[5058]: I1014 08:28:40.406779 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ql45" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="registry-server" containerID="cri-o://c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef" gracePeriod=2 Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.365473 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.417863 5058 generic.go:334] "Generic (PLEG): container finished" podID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerID="c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef" exitCode=0 Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.417922 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerDied","Data":"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef"} Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.417940 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ql45" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.417967 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ql45" event={"ID":"77f485a8-94b8-43c3-a0c1-1fe8c1aab735","Type":"ContainerDied","Data":"4a427c3194f47c52e27b385e420a810464b6eb6e00b91525cc4355ef97481b22"} Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.417995 5058 scope.go:117] "RemoveContainer" containerID="c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.441460 5058 scope.go:117] "RemoveContainer" containerID="2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.460471 5058 scope.go:117] "RemoveContainer" containerID="b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.462027 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities\") pod \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.462213 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content\") pod \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.462289 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2rs\" (UniqueName: \"kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs\") pod \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\" (UID: \"77f485a8-94b8-43c3-a0c1-1fe8c1aab735\") " Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.463022 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities" (OuterVolumeSpecName: "utilities") pod "77f485a8-94b8-43c3-a0c1-1fe8c1aab735" (UID: "77f485a8-94b8-43c3-a0c1-1fe8c1aab735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.467652 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs" (OuterVolumeSpecName: "kube-api-access-st2rs") pod "77f485a8-94b8-43c3-a0c1-1fe8c1aab735" (UID: "77f485a8-94b8-43c3-a0c1-1fe8c1aab735"). InnerVolumeSpecName "kube-api-access-st2rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.508798 5058 scope.go:117] "RemoveContainer" containerID="c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef" Oct 14 08:28:41 crc kubenswrapper[5058]: E1014 08:28:41.509195 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef\": container with ID starting with c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef not found: ID does not exist" containerID="c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.509233 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef"} err="failed to get container status \"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef\": rpc error: code = NotFound desc = could not find container \"c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef\": container with ID starting with c58cebe78c5032f609a6df69584983fdd8a4a3e70b8ced33ffe2355dfed3d3ef not found: ID does not exist" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.509257 5058 scope.go:117] "RemoveContainer" containerID="2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7" Oct 14 08:28:41 crc kubenswrapper[5058]: E1014 08:28:41.509507 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7\": container with ID starting with 2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7 not found: ID does not exist" containerID="2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.509528 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7"} err="failed to get container status \"2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7\": rpc error: code = NotFound desc = could not find container \"2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7\": container with ID starting with 2ad5e1fe8c852983b06c291225d80f9affb5d286e9a1769b6f5c1910179694f7 not found: ID does not exist" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.509542 5058 scope.go:117] "RemoveContainer" containerID="b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa" Oct 14 08:28:41 crc kubenswrapper[5058]: E1014 08:28:41.509733 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa\": container with ID starting with b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa not found: ID does not exist" containerID="b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.509760 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa"} err="failed to get container status \"b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa\": rpc error: code = NotFound desc = could not 
find container \"b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa\": container with ID starting with b82017518a106f7fe0b81a506030fac97009214c5b19c172a901452b7b6ec5aa not found: ID does not exist" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.542120 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77f485a8-94b8-43c3-a0c1-1fe8c1aab735" (UID: "77f485a8-94b8-43c3-a0c1-1fe8c1aab735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.563743 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.563777 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2rs\" (UniqueName: \"kubernetes.io/projected/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-kube-api-access-st2rs\") on node \"crc\" DevicePath \"\"" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.563790 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f485a8-94b8-43c3-a0c1-1fe8c1aab735-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.766330 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"] Oct 14 08:28:41 crc kubenswrapper[5058]: I1014 08:28:41.772521 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ql45"] Oct 14 08:28:42 crc kubenswrapper[5058]: I1014 08:28:42.806438 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" path="/var/lib/kubelet/pods/77f485a8-94b8-43c3-a0c1-1fe8c1aab735/volumes" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.157762 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd"] Oct 14 08:30:00 crc kubenswrapper[5058]: E1014 08:30:00.158866 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="extract-content" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.158891 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="extract-content" Oct 14 08:30:00 crc kubenswrapper[5058]: E1014 08:30:00.158918 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="extract-utilities" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.158928 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="extract-utilities" Oct 14 08:30:00 crc kubenswrapper[5058]: E1014 08:30:00.158963 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="registry-server" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.158973 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="registry-server" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.159220 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f485a8-94b8-43c3-a0c1-1fe8c1aab735" containerName="registry-server"
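
The burst of "RemoveStaleState" and "Deleted CPUSet assignment" entries directly above is the kubelet's CPU and memory managers purging per-container accounting left over from the just-deleted redhat-operators-6ql45 pod (UID 77f485a8-94b8-43c3-a0c1-1fe8c1aab735); as the log shows, the cleanup fires when the next pod (here the collect-profiles job) is admitted, and despite the E (error) severity on the cpu_manager lines, each E/I pair is evidently routine here: one eviction notice per container plus its state_mem confirmation. The full life of such a catalog pod can be reconstructed from its SyncLoop and PLEG records; the sketch below (Python; it assumes the journal has been saved to a plain-text file, the name kubelet.log being illustrative) prints that timeline for one pod:

    # Sketch: rebuild one pod's lifecycle timeline from saved kubelet journal
    # text. "kubelet.log" and the chosen pod name are illustrative assumptions.
    import re

    POD = "openshift-marketplace/redhat-operators-6ql45"
    sync_re = re.compile(r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\[([^\]]*)\]')
    pleg_re = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" '
                         r'event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}')

    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            ts = line[:15]  # journald prefix, e.g. "Oct 14 08:28:27"
            m = sync_re.search(line)
            if m and POD in m.group(2):
                print(ts, "SyncLoop", m.group(1))
                continue
            m = pleg_re.search(line)
            if m and m.group(1) == POD:
                # Truncate the container ID for readability.
                print(ts, "PLEG", m.group(2), m.group(3)[:12])
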
Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.160034 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.162506 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.162641 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.165544 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd"] Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.205728 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.205862 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.205898 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4h8\" (UniqueName: \"kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.306699 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.306750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf4h8\" (UniqueName: \"kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.306847 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014
08:30:00.309149 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.315052 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.324321 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf4h8\" (UniqueName: \"kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8\") pod \"collect-profiles-29340510-h65pd\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.478367 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:00 crc kubenswrapper[5058]: I1014 08:30:00.949839 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd"] Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.142264 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" event={"ID":"2376534f-b530-4fa8-8436-72741d791df5","Type":"ContainerStarted","Data":"f3f4c9851ea8a268404411371b10a44a7fdd8663537e6091e5ae2a11f23cb01e"} Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.142618 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" event={"ID":"2376534f-b530-4fa8-8436-72741d791df5","Type":"ContainerStarted","Data":"782d7bccd7b2e2cd7baf47df7c0634113d1dd9ac1d67c3388f99e1908db8b9d8"} Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.164548 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" podStartSLOduration=1.164525961 podStartE2EDuration="1.164525961s" podCreationTimestamp="2025-10-14 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:30:01.15990532 +0000 UTC m=+6149.070989146" watchObservedRunningTime="2025-10-14 08:30:01.164525961 +0000 UTC m=+6149.075609767" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.684750 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.686274 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z9pm"
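
The pod_startup_latency_tracker records in this log are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). When nothing had to be pulled, both pull timestamps stay at Go's zero time (0001-01-01) and the two durations coincide, which is exactly what the collect-profiles-29340510-h65pd record above shows. The arithmetic can be checked against the redhat-operators-6ql45 record from 08:28:31 (a sketch; the SLO interpretation is inferred from the numbers, not taken from the kubelet source, and the timestamps are truncated to microseconds because the log carries nanoseconds):

    # Check the startup-duration arithmetic from the 08:28:31 record above.
    from datetime import datetime

    def ts(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f %z")

    created    = ts("2025-10-14 08:28:27.000000 +0000")  # podCreationTimestamp
    first_pull = ts("2025-10-14 08:28:28.284699 +0000")  # firstStartedPulling
    last_pull  = ts("2025-10-14 08:28:30.771312 +0000")  # lastFinishedPulling
    observed   = ts("2025-10-14 08:28:31.362454 +0000")  # watchObservedRunningTime

    e2e = (observed - created).total_seconds()            # ~4.362454 (podStartE2EDuration)
    slo = e2e - (last_pull - first_pull).total_seconds()  # ~1.875841 (podStartSLOduration)
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
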
Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.719222 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.836454 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.836597 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.836629 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tcw\" (UniqueName: \"kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.937919 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.938084 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.938439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.938459 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.938109 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tcw\" (UniqueName: \"kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:01 crc kubenswrapper[5058]: I1014 08:30:01.968120 5058 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-s4tcw\" (UniqueName: \"kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw\") pod \"community-operators-8z9pm\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:02 crc kubenswrapper[5058]: I1014 08:30:02.031095 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:02 crc kubenswrapper[5058]: I1014 08:30:02.155095 5058 generic.go:334] "Generic (PLEG): container finished" podID="2376534f-b530-4fa8-8436-72741d791df5" containerID="f3f4c9851ea8a268404411371b10a44a7fdd8663537e6091e5ae2a11f23cb01e" exitCode=0 Oct 14 08:30:02 crc kubenswrapper[5058]: I1014 08:30:02.155293 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" event={"ID":"2376534f-b530-4fa8-8436-72741d791df5","Type":"ContainerDied","Data":"f3f4c9851ea8a268404411371b10a44a7fdd8663537e6091e5ae2a11f23cb01e"} Oct 14 08:30:02 crc kubenswrapper[5058]: I1014 08:30:02.523985 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.164114 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerID="fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4" exitCode=0 Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.164212 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerDied","Data":"fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4"} Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.164506 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerStarted","Data":"3f908dd47bb3f83385d076769f90f8fb042d3560393ddfdcaeeb4e41277e4b2d"} Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.472441 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.559586 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume\") pod \"2376534f-b530-4fa8-8436-72741d791df5\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.559784 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume\") pod \"2376534f-b530-4fa8-8436-72741d791df5\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.559945 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf4h8\" (UniqueName: \"kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8\") pod \"2376534f-b530-4fa8-8436-72741d791df5\" (UID: \"2376534f-b530-4fa8-8436-72741d791df5\") " Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.560774 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume" (OuterVolumeSpecName: "config-volume") pod "2376534f-b530-4fa8-8436-72741d791df5" (UID: "2376534f-b530-4fa8-8436-72741d791df5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.565201 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2376534f-b530-4fa8-8436-72741d791df5" (UID: "2376534f-b530-4fa8-8436-72741d791df5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.568122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8" (OuterVolumeSpecName: "kube-api-access-vf4h8") pod "2376534f-b530-4fa8-8436-72741d791df5" (UID: "2376534f-b530-4fa8-8436-72741d791df5"). InnerVolumeSpecName "kube-api-access-vf4h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.656093 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.656156 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.661596 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf4h8\" (UniqueName: \"kubernetes.io/projected/2376534f-b530-4fa8-8436-72741d791df5-kube-api-access-vf4h8\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.661641 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2376534f-b530-4fa8-8436-72741d791df5-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:03 crc kubenswrapper[5058]: I1014 08:30:03.661660 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2376534f-b530-4fa8-8436-72741d791df5-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.174624 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" event={"ID":"2376534f-b530-4fa8-8436-72741d791df5","Type":"ContainerDied","Data":"782d7bccd7b2e2cd7baf47df7c0634113d1dd9ac1d67c3388f99e1908db8b9d8"} Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.174647 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd" Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.174661 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782d7bccd7b2e2cd7baf47df7c0634113d1dd9ac1d67c3388f99e1908db8b9d8" Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.178849 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerStarted","Data":"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87"} Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.233041 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw"] Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.239274 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340465-wwkdw"] Oct 14 08:30:04 crc kubenswrapper[5058]: I1014 08:30:04.801763 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae82042-84ed-4477-a677-ea1cba5b70a9" path="/var/lib/kubelet/pods/6ae82042-84ed-4477-a677-ea1cba5b70a9/volumes" Oct 14 08:30:05 crc kubenswrapper[5058]: I1014 08:30:05.188001 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerID="549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87" exitCode=0 Oct 14 08:30:05 crc kubenswrapper[5058]: I1014 08:30:05.188051 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerDied","Data":"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87"} Oct 14 08:30:06 crc kubenswrapper[5058]: I1014 08:30:06.198900 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerStarted","Data":"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2"} Oct 14 08:30:06 crc kubenswrapper[5058]: I1014 08:30:06.221877 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8z9pm" podStartSLOduration=2.619592396 podStartE2EDuration="5.221856341s" podCreationTimestamp="2025-10-14 08:30:01 +0000 UTC" firstStartedPulling="2025-10-14 08:30:03.165829328 +0000 UTC m=+6151.076913134" lastFinishedPulling="2025-10-14 08:30:05.768093233 +0000 UTC m=+6153.679177079" observedRunningTime="2025-10-14 08:30:06.220905004 +0000 UTC m=+6154.131988810" watchObservedRunningTime="2025-10-14 08:30:06.221856341 +0000 UTC m=+6154.132940167" Oct 14 08:30:12 crc kubenswrapper[5058]: I1014 08:30:12.032205 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:12 crc kubenswrapper[5058]: I1014 08:30:12.032726 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:12 crc kubenswrapper[5058]: I1014 08:30:12.113751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:12 crc kubenswrapper[5058]: I1014 08:30:12.296582 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8z9pm"
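
The SyncLoop (probe) sequence above is the normal warm-up pattern for a catalog pod: the startup probe first reports unhealthy while registry-server is still initializing, then flips to started, and only afterwards does the readiness probe report ready (the empty status="" is the initial, not-yet-probed state; the same three-step sequence appears for redhat-operators-6ql45 at 08:28:37). Probe transitions per pod can be extracted from the journal with a small sketch like the following (again assuming a saved kubelet.log; the file name is illustrative):

    # Sketch: collect probe-status transitions per (pod, probe) pair from
    # saved kubelet journal text. "kubelet.log" is an assumed file name.
    import re
    from collections import defaultdict

    probe_re = re.compile(r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')
    transitions = defaultdict(list)  # (pod, probe) -> [(timestamp, status), ...]

    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            m = probe_re.search(line)
            if m:
                probe, status, pod = m.groups()
                transitions[(pod, probe)].append((line[:15], status or "<initial>"))

    for (pod, probe), events in sorted(transitions.items()):
        print(pod, probe, " -> ".join(status for _, status in events))
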
pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:12 crc kubenswrapper[5058]: I1014 08:30:12.357714 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.266285 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8z9pm" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="registry-server" containerID="cri-o://a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2" gracePeriod=2 Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.817983 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.942050 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4tcw\" (UniqueName: \"kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw\") pod \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.942212 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content\") pod \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.942274 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities\") pod \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\" (UID: \"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b\") " Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.944260 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities" (OuterVolumeSpecName: "utilities") pod "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" (UID: "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.951143 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw" (OuterVolumeSpecName: "kube-api-access-s4tcw") pod "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" (UID: "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b"). InnerVolumeSpecName "kube-api-access-s4tcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:30:14 crc kubenswrapper[5058]: I1014 08:30:14.999765 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" (UID: "cd08df6d-294e-4dc6-9beb-7dc2cc0a254b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.044721 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4tcw\" (UniqueName: \"kubernetes.io/projected/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-kube-api-access-s4tcw\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.044812 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.044852 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.277126 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerID="a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2" exitCode=0 Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.277179 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerDied","Data":"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2"} Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.277221 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z9pm" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.277254 5058 scope.go:117] "RemoveContainer" containerID="a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.277236 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z9pm" event={"ID":"cd08df6d-294e-4dc6-9beb-7dc2cc0a254b","Type":"ContainerDied","Data":"3f908dd47bb3f83385d076769f90f8fb042d3560393ddfdcaeeb4e41277e4b2d"} Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.301450 5058 scope.go:117] "RemoveContainer" containerID="549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.321952 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.326703 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8z9pm"] Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.351547 5058 scope.go:117] "RemoveContainer" containerID="fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.374069 5058 scope.go:117] "RemoveContainer" containerID="a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2" Oct 14 08:30:15 crc kubenswrapper[5058]: E1014 08:30:15.374608 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2\": container with ID starting with a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2 not found: ID does not exist" containerID="a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.374664 
5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2"} err="failed to get container status \"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2\": rpc error: code = NotFound desc = could not find container \"a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2\": container with ID starting with a73cde01fb71d8572409ac39bfb8c85fdbe74014e798c0770ab4abc2cd5529a2 not found: ID does not exist" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.374699 5058 scope.go:117] "RemoveContainer" containerID="549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87" Oct 14 08:30:15 crc kubenswrapper[5058]: E1014 08:30:15.375123 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87\": container with ID starting with 549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87 not found: ID does not exist" containerID="549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.375169 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87"} err="failed to get container status \"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87\": rpc error: code = NotFound desc = could not find container \"549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87\": container with ID starting with 549444752aa8b7bfa52d2477cc0ed9b39a0d0aabfd01e48e0c65799ee6559c87 not found: ID does not exist" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.375197 5058 scope.go:117] "RemoveContainer" containerID="fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4" Oct 14 08:30:15 crc kubenswrapper[5058]: E1014 08:30:15.375677 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4\": container with ID starting with fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4 not found: ID does not exist" containerID="fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4" Oct 14 08:30:15 crc kubenswrapper[5058]: I1014 08:30:15.375744 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4"} err="failed to get container status \"fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4\": rpc error: code = NotFound desc = could not find container \"fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4\": container with ID starting with fe8f209fd0fba5de3bbbe495af62461a773da47e64a7bb2fa07f90ce6d51d8b4 not found: ID does not exist" Oct 14 08:30:16 crc kubenswrapper[5058]: I1014 08:30:16.800564 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" path="/var/lib/kubelet/pods/cd08df6d-294e-4dc6-9beb-7dc2cc0a254b/volumes" Oct 14 08:30:33 crc kubenswrapper[5058]: I1014 08:30:33.656304 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:30:33 crc kubenswrapper[5058]: I1014 08:30:33.657922 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:30:39 crc kubenswrapper[5058]: I1014 08:30:39.927163 5058 scope.go:117] "RemoveContainer" containerID="1df896b0f0917ba7d4d17e56d68f2a463649c05513e56a936a1e1e0ffcb83fea" Oct 14 08:31:03 crc kubenswrapper[5058]: I1014 08:31:03.656914 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:31:03 crc kubenswrapper[5058]: I1014 08:31:03.657898 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:31:03 crc kubenswrapper[5058]: I1014 08:31:03.657987 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:31:03 crc kubenswrapper[5058]: I1014 08:31:03.658904 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:31:03 crc kubenswrapper[5058]: I1014 08:31:03.658972 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935" gracePeriod=600 Oct 14 08:31:04 crc kubenswrapper[5058]: I1014 08:31:04.758391 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935" exitCode=0 Oct 14 08:31:04 crc kubenswrapper[5058]: I1014 08:31:04.758453 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935"} Oct 14 08:31:04 crc kubenswrapper[5058]: I1014 08:31:04.759107 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669"} Oct 14 08:31:04 crc kubenswrapper[5058]: I1014 08:31:04.759136 5058 scope.go:117] "RemoveContainer" containerID="c7633ec1682b4b59b9188f5e36e4904af25844c9f4288a1793c930b29c48f447"
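
The machine-config-daemon records above show a liveness probe that is a plain HTTP GET against http://127.0.0.1:8798/health failing with connection refused at 08:30:03, 08:30:33 and 08:31:03; once the failure threshold is crossed (three failures, 30 seconds apart, are visible here), the kubelet marks the probe unhealthy, kills the container with the pod's configured grace period (gracePeriod=600) and starts a replacement, which is the ContainerDied/ContainerStarted pair at 08:31:04. The trailing RemoveContainer of c7633ec1682b... is routine garbage collection of the previous dead instance. The check itself can be approximated in a few lines (an illustration of an httpGet-style probe, not the kubelet's actual prober; the threshold and timeout are assumed values):

    # Rough emulation of an httpGet liveness check where connection refused
    # counts as a failure. FAILURE_THRESHOLD and the timeout are assumptions.
    import urllib.error
    import urllib.request

    URL = "http://127.0.0.1:8798/health"
    FAILURE_THRESHOLD = 3

    failures = 0
    for _ in range(FAILURE_THRESHOLD):
        try:
            with urllib.request.urlopen(URL, timeout=1) as resp:
                if 200 <= resp.status < 400:  # kubelet treats 2xx/3xx as success
                    print(f"probe ok: HTTP {resp.status}")
                    break
        except (urllib.error.URLError, OSError) as exc:
            failures += 1
            print(f"probe failure {failures}: {exc}")
    else:
        print("failure threshold reached: container would be restarted")
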
Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.367272 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:06 crc kubenswrapper[5058]: E1014 08:32:06.368434 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="extract-utilities" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368456 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="extract-utilities" Oct 14 08:32:06 crc kubenswrapper[5058]: E1014 08:32:06.368506 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="extract-content" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368520 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="extract-content" Oct 14 08:32:06 crc kubenswrapper[5058]: E1014 08:32:06.368551 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="registry-server" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368567 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="registry-server" Oct 14 08:32:06 crc kubenswrapper[5058]: E1014 08:32:06.368590 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2376534f-b530-4fa8-8436-72741d791df5" containerName="collect-profiles" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368603 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2376534f-b530-4fa8-8436-72741d791df5" containerName="collect-profiles" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368909 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd08df6d-294e-4dc6-9beb-7dc2cc0a254b" containerName="registry-server" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.368932 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2376534f-b530-4fa8-8436-72741d791df5" containerName="collect-profiles" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.370848 5058 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.379341 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.451031 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fwd\" (UniqueName: \"kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.451132 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.451191 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.552680 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fwd\" (UniqueName: \"kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.552756 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.552989 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.553577 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.553940 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.575774 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s9fwd\" (UniqueName: \"kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd\") pod \"redhat-marketplace-mwglq\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.694496 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:06 crc kubenswrapper[5058]: I1014 08:32:06.935162 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:06 crc kubenswrapper[5058]: W1014 08:32:06.941685 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd54c118_4b18_4c8c_b415_d6abaf376a0d.slice/crio-05b5ae9cb5c5dfb2e55afdc8499398d3d37a68a5e073c2db62f69ea2896db623 WatchSource:0}: Error finding container 05b5ae9cb5c5dfb2e55afdc8499398d3d37a68a5e073c2db62f69ea2896db623: Status 404 returned error can't find the container with id 05b5ae9cb5c5dfb2e55afdc8499398d3d37a68a5e073c2db62f69ea2896db623 Oct 14 08:32:07 crc kubenswrapper[5058]: I1014 08:32:07.331589 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerID="d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e" exitCode=0 Oct 14 08:32:07 crc kubenswrapper[5058]: I1014 08:32:07.331726 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerDied","Data":"d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e"} Oct 14 08:32:07 crc kubenswrapper[5058]: I1014 08:32:07.332146 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerStarted","Data":"05b5ae9cb5c5dfb2e55afdc8499398d3d37a68a5e073c2db62f69ea2896db623"} Oct 14 08:32:09 crc kubenswrapper[5058]: I1014 08:32:09.359971 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerID="4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025" exitCode=0 Oct 14 08:32:09 crc kubenswrapper[5058]: I1014 08:32:09.360045 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerDied","Data":"4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025"} Oct 14 08:32:10 crc kubenswrapper[5058]: I1014 08:32:10.374035 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerStarted","Data":"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e"} Oct 14 08:32:10 crc kubenswrapper[5058]: I1014 08:32:10.395001 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwglq" podStartSLOduration=1.716669432 podStartE2EDuration="4.394977501s" podCreationTimestamp="2025-10-14 08:32:06 +0000 UTC" firstStartedPulling="2025-10-14 08:32:07.334325238 +0000 UTC m=+6275.245409074" lastFinishedPulling="2025-10-14 08:32:10.012633307 +0000 UTC m=+6277.923717143" observedRunningTime="2025-10-14 08:32:10.390238046 +0000 UTC m=+6278.301321902" 
watchObservedRunningTime="2025-10-14 08:32:10.394977501 +0000 UTC m=+6278.306061317" Oct 14 08:32:16 crc kubenswrapper[5058]: I1014 08:32:16.695450 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:16 crc kubenswrapper[5058]: I1014 08:32:16.695984 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:16 crc kubenswrapper[5058]: I1014 08:32:16.773720 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:17 crc kubenswrapper[5058]: I1014 08:32:17.519517 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:17 crc kubenswrapper[5058]: I1014 08:32:17.591924 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:19 crc kubenswrapper[5058]: I1014 08:32:19.463858 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwglq" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="registry-server" containerID="cri-o://ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e" gracePeriod=2 Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.025253 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.181191 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities\") pod \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.181250 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content\") pod \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.181285 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fwd\" (UniqueName: \"kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd\") pod \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\" (UID: \"cd54c118-4b18-4c8c-b415-d6abaf376a0d\") " Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.183145 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities" (OuterVolumeSpecName: "utilities") pod "cd54c118-4b18-4c8c-b415-d6abaf376a0d" (UID: "cd54c118-4b18-4c8c-b415-d6abaf376a0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.195152 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd" (OuterVolumeSpecName: "kube-api-access-s9fwd") pod "cd54c118-4b18-4c8c-b415-d6abaf376a0d" (UID: "cd54c118-4b18-4c8c-b415-d6abaf376a0d"). InnerVolumeSpecName "kube-api-access-s9fwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.213079 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd54c118-4b18-4c8c-b415-d6abaf376a0d" (UID: "cd54c118-4b18-4c8c-b415-d6abaf376a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.283542 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.283591 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd54c118-4b18-4c8c-b415-d6abaf376a0d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.283605 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fwd\" (UniqueName: \"kubernetes.io/projected/cd54c118-4b18-4c8c-b415-d6abaf376a0d-kube-api-access-s9fwd\") on node \"crc\" DevicePath \"\"" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.476924 5058 generic.go:334] "Generic (PLEG): container finished" podID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerID="ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e" exitCode=0 Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.476974 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerDied","Data":"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e"} Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.476996 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwglq" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.477024 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwglq" event={"ID":"cd54c118-4b18-4c8c-b415-d6abaf376a0d","Type":"ContainerDied","Data":"05b5ae9cb5c5dfb2e55afdc8499398d3d37a68a5e073c2db62f69ea2896db623"} Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.477052 5058 scope.go:117] "RemoveContainer" containerID="ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.495460 5058 scope.go:117] "RemoveContainer" containerID="4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.509583 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.515044 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwglq"] Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.538780 5058 scope.go:117] "RemoveContainer" containerID="d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.565088 5058 scope.go:117] "RemoveContainer" containerID="ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e" Oct 14 08:32:20 crc kubenswrapper[5058]: E1014 08:32:20.565828 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e\": container with ID starting with ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e not found: ID does not exist" containerID="ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.565863 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e"} err="failed to get container status \"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e\": rpc error: code = NotFound desc = could not find container \"ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e\": container with ID starting with ae7cf38c4b9aae493a31f023f39f3683a53a060c624e95f5defad5b62bf6159e not found: ID does not exist" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.565911 5058 scope.go:117] "RemoveContainer" containerID="4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025" Oct 14 08:32:20 crc kubenswrapper[5058]: E1014 08:32:20.566255 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025\": container with ID starting with 4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025 not found: ID does not exist" containerID="4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.566281 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025"} err="failed to get container status \"4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025\": rpc error: code = NotFound desc = could not find 
container \"4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025\": container with ID starting with 4c18efc6a1efad39d28bd66d1bb53684a264ce78988aab9cf30894c26e911025 not found: ID does not exist" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.566321 5058 scope.go:117] "RemoveContainer" containerID="d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e" Oct 14 08:32:20 crc kubenswrapper[5058]: E1014 08:32:20.566702 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e\": container with ID starting with d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e not found: ID does not exist" containerID="d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.566732 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e"} err="failed to get container status \"d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e\": rpc error: code = NotFound desc = could not find container \"d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e\": container with ID starting with d2a6a755217367566ebffdfc4005db81a8c85ff75025ef81abb8696e98b6752e not found: ID does not exist" Oct 14 08:32:20 crc kubenswrapper[5058]: I1014 08:32:20.804359 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" path="/var/lib/kubelet/pods/cd54c118-4b18-4c8c-b415-d6abaf376a0d/volumes" Oct 14 08:33:03 crc kubenswrapper[5058]: I1014 08:33:03.655845 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:33:03 crc kubenswrapper[5058]: I1014 08:33:03.656512 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:33:33 crc kubenswrapper[5058]: I1014 08:33:33.656666 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:33:33 crc kubenswrapper[5058]: I1014 08:33:33.657693 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:34:03 crc kubenswrapper[5058]: I1014 08:34:03.655853 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 
08:34:03 crc kubenswrapper[5058]: I1014 08:34:03.656682 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:34:03 crc kubenswrapper[5058]: I1014 08:34:03.656755 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:34:03 crc kubenswrapper[5058]: I1014 08:34:03.657754 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:34:03 crc kubenswrapper[5058]: I1014 08:34:03.657892 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" gracePeriod=600 Oct 14 08:34:03 crc kubenswrapper[5058]: E1014 08:34:03.784314 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:34:04 crc kubenswrapper[5058]: I1014 08:34:04.537105 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" exitCode=0 Oct 14 08:34:04 crc kubenswrapper[5058]: I1014 08:34:04.537195 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669"} Oct 14 08:34:04 crc kubenswrapper[5058]: I1014 08:34:04.537601 5058 scope.go:117] "RemoveContainer" containerID="63b71ae0ce7d41cc173b612b4f5156ec321637a60e39d322fbf3263099111935" Oct 14 08:34:04 crc kubenswrapper[5058]: I1014 08:34:04.538362 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:34:04 crc kubenswrapper[5058]: E1014 08:34:04.538850 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:34:19 crc kubenswrapper[5058]: I1014 08:34:19.790084 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:34:19 crc 
kubenswrapper[5058]: E1014 08:34:19.791037 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:34:32 crc kubenswrapper[5058]: I1014 08:34:32.791006 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:34:32 crc kubenswrapper[5058]: E1014 08:34:32.792003 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:34:44 crc kubenswrapper[5058]: I1014 08:34:44.790064 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:34:44 crc kubenswrapper[5058]: E1014 08:34:44.791223 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:34:55 crc kubenswrapper[5058]: I1014 08:34:55.789833 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:34:55 crc kubenswrapper[5058]: E1014 08:34:55.791152 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:35:06 crc kubenswrapper[5058]: I1014 08:35:06.789863 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:35:06 crc kubenswrapper[5058]: E1014 08:35:06.790726 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:35:18 crc kubenswrapper[5058]: I1014 08:35:18.790030 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:35:18 crc kubenswrapper[5058]: E1014 08:35:18.790882 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.502125 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:22 crc kubenswrapper[5058]: E1014 08:35:22.503010 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="extract-utilities" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.503030 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="extract-utilities" Oct 14 08:35:22 crc kubenswrapper[5058]: E1014 08:35:22.503051 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="extract-content" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.503063 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="extract-content" Oct 14 08:35:22 crc kubenswrapper[5058]: E1014 08:35:22.503121 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="registry-server" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.503133 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="registry-server" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.503620 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd54c118-4b18-4c8c-b415-d6abaf376a0d" containerName="registry-server" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.505433 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.526036 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.705144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.705266 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m62\" (UniqueName: \"kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.705308 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.806902 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8m62\" (UniqueName: \"kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.807013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.807120 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.808009 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.808070 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.838185 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8m62\" (UniqueName: \"kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62\") pod \"certified-operators-6bfnh\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:22 crc kubenswrapper[5058]: I1014 08:35:22.839820 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:23 crc kubenswrapper[5058]: I1014 08:35:23.350590 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:23 crc kubenswrapper[5058]: W1014 08:35:23.368532 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7509504_f5d1_4d52_bd63_96ec256beeb2.slice/crio-53741905ad2f18c0f16214a3acfe6b39df0c3f21037b0eed01e763a8ddf05444 WatchSource:0}: Error finding container 53741905ad2f18c0f16214a3acfe6b39df0c3f21037b0eed01e763a8ddf05444: Status 404 returned error can't find the container with id 53741905ad2f18c0f16214a3acfe6b39df0c3f21037b0eed01e763a8ddf05444 Oct 14 08:35:24 crc kubenswrapper[5058]: I1014 08:35:24.340942 5058 generic.go:334] "Generic (PLEG): container finished" podID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerID="80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829" exitCode=0 Oct 14 08:35:24 crc kubenswrapper[5058]: I1014 08:35:24.341058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerDied","Data":"80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829"} Oct 14 08:35:24 crc kubenswrapper[5058]: I1014 08:35:24.341367 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerStarted","Data":"53741905ad2f18c0f16214a3acfe6b39df0c3f21037b0eed01e763a8ddf05444"} Oct 14 08:35:24 crc kubenswrapper[5058]: I1014 08:35:24.346887 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:35:26 crc kubenswrapper[5058]: I1014 08:35:26.365578 5058 generic.go:334] "Generic (PLEG): container finished" podID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerID="14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8" exitCode=0 Oct 14 08:35:26 crc kubenswrapper[5058]: I1014 08:35:26.365715 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerDied","Data":"14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8"} Oct 14 08:35:27 crc kubenswrapper[5058]: I1014 08:35:27.379950 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerStarted","Data":"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b"} Oct 14 08:35:27 crc kubenswrapper[5058]: I1014 08:35:27.421092 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bfnh" podStartSLOduration=2.98824683 podStartE2EDuration="5.421063554s" podCreationTimestamp="2025-10-14 08:35:22 +0000 UTC" firstStartedPulling="2025-10-14 08:35:24.345534569 +0000 UTC 
m=+6472.256618415" lastFinishedPulling="2025-10-14 08:35:26.778351323 +0000 UTC m=+6474.689435139" observedRunningTime="2025-10-14 08:35:27.410897326 +0000 UTC m=+6475.321981202" watchObservedRunningTime="2025-10-14 08:35:27.421063554 +0000 UTC m=+6475.332147370" Oct 14 08:35:29 crc kubenswrapper[5058]: I1014 08:35:29.790707 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:35:29 crc kubenswrapper[5058]: E1014 08:35:29.791430 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:35:32 crc kubenswrapper[5058]: I1014 08:35:32.840888 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:32 crc kubenswrapper[5058]: I1014 08:35:32.840935 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:32 crc kubenswrapper[5058]: I1014 08:35:32.911481 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:33 crc kubenswrapper[5058]: I1014 08:35:33.497931 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:33 crc kubenswrapper[5058]: I1014 08:35:33.556252 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:35 crc kubenswrapper[5058]: I1014 08:35:35.448202 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6bfnh" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="registry-server" containerID="cri-o://a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b" gracePeriod=2 Oct 14 08:35:35 crc kubenswrapper[5058]: I1014 08:35:35.957170 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.149254 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities\") pod \"e7509504-f5d1-4d52-bd63-96ec256beeb2\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.149349 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8m62\" (UniqueName: \"kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62\") pod \"e7509504-f5d1-4d52-bd63-96ec256beeb2\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.149388 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content\") pod \"e7509504-f5d1-4d52-bd63-96ec256beeb2\" (UID: \"e7509504-f5d1-4d52-bd63-96ec256beeb2\") " Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.150913 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities" (OuterVolumeSpecName: "utilities") pod "e7509504-f5d1-4d52-bd63-96ec256beeb2" (UID: "e7509504-f5d1-4d52-bd63-96ec256beeb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.164023 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62" (OuterVolumeSpecName: "kube-api-access-g8m62") pod "e7509504-f5d1-4d52-bd63-96ec256beeb2" (UID: "e7509504-f5d1-4d52-bd63-96ec256beeb2"). InnerVolumeSpecName "kube-api-access-g8m62". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.213233 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7509504-f5d1-4d52-bd63-96ec256beeb2" (UID: "e7509504-f5d1-4d52-bd63-96ec256beeb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.251168 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8m62\" (UniqueName: \"kubernetes.io/projected/e7509504-f5d1-4d52-bd63-96ec256beeb2-kube-api-access-g8m62\") on node \"crc\" DevicePath \"\"" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.251206 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.251222 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7509504-f5d1-4d52-bd63-96ec256beeb2-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.464314 5058 generic.go:334] "Generic (PLEG): container finished" podID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerID="a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b" exitCode=0 Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.464404 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerDied","Data":"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b"} Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.464447 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bfnh" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.464484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bfnh" event={"ID":"e7509504-f5d1-4d52-bd63-96ec256beeb2","Type":"ContainerDied","Data":"53741905ad2f18c0f16214a3acfe6b39df0c3f21037b0eed01e763a8ddf05444"} Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.464516 5058 scope.go:117] "RemoveContainer" containerID="a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.500357 5058 scope.go:117] "RemoveContainer" containerID="14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.520380 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.532424 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6bfnh"] Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.547710 5058 scope.go:117] "RemoveContainer" containerID="80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.570226 5058 scope.go:117] "RemoveContainer" containerID="a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b" Oct 14 08:35:36 crc kubenswrapper[5058]: E1014 08:35:36.570645 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b\": container with ID starting with a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b not found: ID does not exist" containerID="a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.570704 
5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b"} err="failed to get container status \"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b\": rpc error: code = NotFound desc = could not find container \"a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b\": container with ID starting with a388a11fea291fd80a9650aa62f260dc04e067603304e6f2976652b705631d6b not found: ID does not exist" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.570727 5058 scope.go:117] "RemoveContainer" containerID="14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8" Oct 14 08:35:36 crc kubenswrapper[5058]: E1014 08:35:36.571106 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8\": container with ID starting with 14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8 not found: ID does not exist" containerID="14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.571155 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8"} err="failed to get container status \"14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8\": rpc error: code = NotFound desc = could not find container \"14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8\": container with ID starting with 14da514b85e0473e87def7ddbda4704e0185d6cbe62559012cab2a984ce09bc8 not found: ID does not exist" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.571171 5058 scope.go:117] "RemoveContainer" containerID="80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829" Oct 14 08:35:36 crc kubenswrapper[5058]: E1014 08:35:36.571594 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829\": container with ID starting with 80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829 not found: ID does not exist" containerID="80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.571659 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829"} err="failed to get container status \"80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829\": rpc error: code = NotFound desc = could not find container \"80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829\": container with ID starting with 80e98f334f9f1bdbd24a9c1197e95abcb29e6057e8221b3f0cc8447ec30a2829 not found: ID does not exist" Oct 14 08:35:36 crc kubenswrapper[5058]: I1014 08:35:36.809552 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" path="/var/lib/kubelet/pods/e7509504-f5d1-4d52-bd63-96ec256beeb2/volumes" Oct 14 08:35:40 crc kubenswrapper[5058]: I1014 08:35:40.790928 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:35:40 crc kubenswrapper[5058]: E1014 08:35:40.791836 5058 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:35:53 crc kubenswrapper[5058]: I1014 08:35:53.791192 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:35:53 crc kubenswrapper[5058]: E1014 08:35:53.792255 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:36:05 crc kubenswrapper[5058]: I1014 08:36:05.789711 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:36:05 crc kubenswrapper[5058]: E1014 08:36:05.790571 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:36:20 crc kubenswrapper[5058]: I1014 08:36:20.789904 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:36:20 crc kubenswrapper[5058]: E1014 08:36:20.790955 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:36:31 crc kubenswrapper[5058]: I1014 08:36:31.790556 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:36:31 crc kubenswrapper[5058]: E1014 08:36:31.791689 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:36:44 crc kubenswrapper[5058]: I1014 08:36:44.790855 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:36:44 crc kubenswrapper[5058]: E1014 08:36:44.792099 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:36:56 crc kubenswrapper[5058]: I1014 08:36:56.789637 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:36:56 crc kubenswrapper[5058]: E1014 08:36:56.790474 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:37:08 crc kubenswrapper[5058]: I1014 08:37:08.790067 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:37:08 crc kubenswrapper[5058]: E1014 08:37:08.791032 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:37:23 crc kubenswrapper[5058]: I1014 08:37:23.791068 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:37:23 crc kubenswrapper[5058]: E1014 08:37:23.792192 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:37:36 crc kubenswrapper[5058]: I1014 08:37:36.790499 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:37:36 crc kubenswrapper[5058]: E1014 08:37:36.791430 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:37:48 crc kubenswrapper[5058]: I1014 08:37:48.790002 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:37:48 crc kubenswrapper[5058]: E1014 08:37:48.791060 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:38:00 crc kubenswrapper[5058]: I1014 08:38:00.790157 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:38:00 crc kubenswrapper[5058]: E1014 08:38:00.791323 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:38:15 crc kubenswrapper[5058]: I1014 08:38:15.791009 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:38:15 crc kubenswrapper[5058]: E1014 08:38:15.792782 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:38:28 crc kubenswrapper[5058]: I1014 08:38:28.790302 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:38:28 crc kubenswrapper[5058]: E1014 08:38:28.790877 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.810318 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:40 crc kubenswrapper[5058]: E1014 08:38:40.811406 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="extract-content" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.811427 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="extract-content" Oct 14 08:38:40 crc kubenswrapper[5058]: E1014 08:38:40.811464 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="registry-server" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.811476 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="registry-server" Oct 14 08:38:40 crc kubenswrapper[5058]: E1014 08:38:40.811509 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="extract-utilities" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.811523 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="extract-utilities" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.811753 5058 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7509504-f5d1-4d52-bd63-96ec256beeb2" containerName="registry-server" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.813740 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.816435 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.926925 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.926997 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z65w\" (UniqueName: \"kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:40 crc kubenswrapper[5058]: I1014 08:38:40.927193 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.028265 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.028490 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.028550 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z65w\" (UniqueName: \"kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.029102 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.029112 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " 
pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.057858 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z65w\" (UniqueName: \"kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w\") pod \"redhat-operators-mwkbf\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.147221 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:41 crc kubenswrapper[5058]: I1014 08:38:41.578208 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:42 crc kubenswrapper[5058]: I1014 08:38:42.205172 5058 generic.go:334] "Generic (PLEG): container finished" podID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerID="19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce" exitCode=0 Oct 14 08:38:42 crc kubenswrapper[5058]: I1014 08:38:42.205229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerDied","Data":"19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce"} Oct 14 08:38:42 crc kubenswrapper[5058]: I1014 08:38:42.205285 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerStarted","Data":"f8e7a3d9a714823127dbfec9cfd68d9c1eab705dfbce981acdbe7e62bebc0d9f"} Oct 14 08:38:43 crc kubenswrapper[5058]: I1014 08:38:43.217358 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerStarted","Data":"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2"} Oct 14 08:38:43 crc kubenswrapper[5058]: I1014 08:38:43.789163 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:38:43 crc kubenswrapper[5058]: E1014 08:38:43.789415 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:38:44 crc kubenswrapper[5058]: I1014 08:38:44.230998 5058 generic.go:334] "Generic (PLEG): container finished" podID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerID="afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2" exitCode=0 Oct 14 08:38:44 crc kubenswrapper[5058]: I1014 08:38:44.231067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerDied","Data":"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2"} Oct 14 08:38:45 crc kubenswrapper[5058]: I1014 08:38:45.243787 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" 
event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerStarted","Data":"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae"} Oct 14 08:38:45 crc kubenswrapper[5058]: I1014 08:38:45.274855 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mwkbf" podStartSLOduration=2.708701129 podStartE2EDuration="5.27483169s" podCreationTimestamp="2025-10-14 08:38:40 +0000 UTC" firstStartedPulling="2025-10-14 08:38:42.206718075 +0000 UTC m=+6670.117801891" lastFinishedPulling="2025-10-14 08:38:44.772848606 +0000 UTC m=+6672.683932452" observedRunningTime="2025-10-14 08:38:45.265179886 +0000 UTC m=+6673.176263762" watchObservedRunningTime="2025-10-14 08:38:45.27483169 +0000 UTC m=+6673.185915536" Oct 14 08:38:51 crc kubenswrapper[5058]: I1014 08:38:51.148196 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:51 crc kubenswrapper[5058]: I1014 08:38:51.149007 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:51 crc kubenswrapper[5058]: I1014 08:38:51.224332 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:51 crc kubenswrapper[5058]: I1014 08:38:51.377070 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:51 crc kubenswrapper[5058]: I1014 08:38:51.473210 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.318209 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mwkbf" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="registry-server" containerID="cri-o://931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae" gracePeriod=2 Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.771510 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.929438 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content\") pod \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.929819 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities\") pod \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.930000 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z65w\" (UniqueName: \"kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w\") pod \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\" (UID: \"ebbbdff0-4171-4521-9860-ea04e0a3fdf4\") " Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.931252 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities" (OuterVolumeSpecName: "utilities") pod "ebbbdff0-4171-4521-9860-ea04e0a3fdf4" (UID: "ebbbdff0-4171-4521-9860-ea04e0a3fdf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:38:53 crc kubenswrapper[5058]: I1014 08:38:53.939249 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w" (OuterVolumeSpecName: "kube-api-access-8z65w") pod "ebbbdff0-4171-4521-9860-ea04e0a3fdf4" (UID: "ebbbdff0-4171-4521-9860-ea04e0a3fdf4"). InnerVolumeSpecName "kube-api-access-8z65w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.035358 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z65w\" (UniqueName: \"kubernetes.io/projected/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-kube-api-access-8z65w\") on node \"crc\" DevicePath \"\"" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.035437 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.327615 5058 generic.go:334] "Generic (PLEG): container finished" podID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerID="931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae" exitCode=0 Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.327661 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerDied","Data":"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae"} Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.327699 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mwkbf" event={"ID":"ebbbdff0-4171-4521-9860-ea04e0a3fdf4","Type":"ContainerDied","Data":"f8e7a3d9a714823127dbfec9cfd68d9c1eab705dfbce981acdbe7e62bebc0d9f"} Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.327704 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mwkbf" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.327734 5058 scope.go:117] "RemoveContainer" containerID="931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.349863 5058 scope.go:117] "RemoveContainer" containerID="afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.371001 5058 scope.go:117] "RemoveContainer" containerID="19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.389275 5058 scope.go:117] "RemoveContainer" containerID="931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae" Oct 14 08:38:54 crc kubenswrapper[5058]: E1014 08:38:54.390193 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae\": container with ID starting with 931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae not found: ID does not exist" containerID="931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.390224 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae"} err="failed to get container status \"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae\": rpc error: code = NotFound desc = could not find container \"931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae\": container with ID starting with 931a4399336f1210eb899350f7021062d1c02ff1e561029c53f691646450b3ae not found: ID does not exist" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.390250 5058 scope.go:117] 
"RemoveContainer" containerID="afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2" Oct 14 08:38:54 crc kubenswrapper[5058]: E1014 08:38:54.390972 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2\": container with ID starting with afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2 not found: ID does not exist" containerID="afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.390995 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2"} err="failed to get container status \"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2\": rpc error: code = NotFound desc = could not find container \"afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2\": container with ID starting with afd357c8f4b6a88c11140aa8010abcfd71f5a53ce9b0917fc563052fdfbf50f2 not found: ID does not exist" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.391020 5058 scope.go:117] "RemoveContainer" containerID="19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce" Oct 14 08:38:54 crc kubenswrapper[5058]: E1014 08:38:54.391589 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce\": container with ID starting with 19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce not found: ID does not exist" containerID="19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.391615 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce"} err="failed to get container status \"19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce\": rpc error: code = NotFound desc = could not find container \"19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce\": container with ID starting with 19b2daeab71d7b9398f96040945cf3317b4f3e646b2a7465cedc936d78d8f8ce not found: ID does not exist" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.792207 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebbbdff0-4171-4521-9860-ea04e0a3fdf4" (UID: "ebbbdff0-4171-4521-9860-ea04e0a3fdf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.858032 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebbbdff0-4171-4521-9860-ea04e0a3fdf4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.949534 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:54 crc kubenswrapper[5058]: I1014 08:38:54.971765 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mwkbf"] Oct 14 08:38:56 crc kubenswrapper[5058]: I1014 08:38:56.804620 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" path="/var/lib/kubelet/pods/ebbbdff0-4171-4521-9860-ea04e0a3fdf4/volumes" Oct 14 08:38:57 crc kubenswrapper[5058]: I1014 08:38:57.791199 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:38:57 crc kubenswrapper[5058]: E1014 08:38:57.791748 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:39:08 crc kubenswrapper[5058]: I1014 08:39:08.790081 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:39:09 crc kubenswrapper[5058]: I1014 08:39:09.469636 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e"} Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.632557 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:42 crc kubenswrapper[5058]: E1014 08:40:42.633375 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="extract-content" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.633387 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="extract-content" Oct 14 08:40:42 crc kubenswrapper[5058]: E1014 08:40:42.633410 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="registry-server" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.633416 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="registry-server" Oct 14 08:40:42 crc kubenswrapper[5058]: E1014 08:40:42.633425 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="extract-utilities" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.633431 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="extract-utilities" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.633572 5058 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbbdff0-4171-4521-9860-ea04e0a3fdf4" containerName="registry-server" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.634503 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.649602 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.724268 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.724349 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx944\" (UniqueName: \"kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.724384 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.826211 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx944\" (UniqueName: \"kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.826262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.826329 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.826749 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.826856 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.849595 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx944\" (UniqueName: \"kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944\") pod \"community-operators-tp2l7\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:42 crc kubenswrapper[5058]: I1014 08:40:42.956034 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:43 crc kubenswrapper[5058]: I1014 08:40:43.400402 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:44 crc kubenswrapper[5058]: I1014 08:40:44.371011 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerID="b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773" exitCode=0 Oct 14 08:40:44 crc kubenswrapper[5058]: I1014 08:40:44.371142 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerDied","Data":"b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773"} Oct 14 08:40:44 crc kubenswrapper[5058]: I1014 08:40:44.371328 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerStarted","Data":"adf6c05e048a0b79e69c54d3079212e2bab84b1dbb24c0d444eec81a6f1bb606"} Oct 14 08:40:44 crc kubenswrapper[5058]: I1014 08:40:44.374859 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:40:45 crc kubenswrapper[5058]: I1014 08:40:45.381945 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerStarted","Data":"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0"} Oct 14 08:40:46 crc kubenswrapper[5058]: I1014 08:40:46.389892 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerID="8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0" exitCode=0 Oct 14 08:40:46 crc kubenswrapper[5058]: I1014 08:40:46.389932 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerDied","Data":"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0"} Oct 14 08:40:47 crc kubenswrapper[5058]: I1014 08:40:47.403367 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerStarted","Data":"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912"} Oct 14 08:40:47 crc kubenswrapper[5058]: I1014 08:40:47.437349 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tp2l7" podStartSLOduration=2.772975625 podStartE2EDuration="5.437319104s" 
podCreationTimestamp="2025-10-14 08:40:42 +0000 UTC" firstStartedPulling="2025-10-14 08:40:44.374302627 +0000 UTC m=+6792.285386473" lastFinishedPulling="2025-10-14 08:40:47.038646116 +0000 UTC m=+6794.949729952" observedRunningTime="2025-10-14 08:40:47.430750541 +0000 UTC m=+6795.341834377" watchObservedRunningTime="2025-10-14 08:40:47.437319104 +0000 UTC m=+6795.348402940" Oct 14 08:40:52 crc kubenswrapper[5058]: I1014 08:40:52.957560 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:52 crc kubenswrapper[5058]: I1014 08:40:52.958064 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:53 crc kubenswrapper[5058]: I1014 08:40:53.029528 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:53 crc kubenswrapper[5058]: I1014 08:40:53.534994 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:53 crc kubenswrapper[5058]: I1014 08:40:53.598327 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:55 crc kubenswrapper[5058]: I1014 08:40:55.474738 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tp2l7" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="registry-server" containerID="cri-o://5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912" gracePeriod=2 Oct 14 08:40:55 crc kubenswrapper[5058]: I1014 08:40:55.954521 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.023908 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx944\" (UniqueName: \"kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944\") pod \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.024008 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content\") pod \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.024048 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities\") pod \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\" (UID: \"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0\") " Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.025697 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities" (OuterVolumeSpecName: "utilities") pod "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" (UID: "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.030358 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944" (OuterVolumeSpecName: "kube-api-access-fx944") pod "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" (UID: "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0"). InnerVolumeSpecName "kube-api-access-fx944". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.093714 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" (UID: "8e28ac7f-6b0b-460e-9407-9e0e4d5850c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.125268 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.125333 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx944\" (UniqueName: \"kubernetes.io/projected/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-kube-api-access-fx944\") on node \"crc\" DevicePath \"\"" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.125356 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.487326 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerID="5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912" exitCode=0 Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.487380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerDied","Data":"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912"} Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.487740 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp2l7" event={"ID":"8e28ac7f-6b0b-460e-9407-9e0e4d5850c0","Type":"ContainerDied","Data":"adf6c05e048a0b79e69c54d3079212e2bab84b1dbb24c0d444eec81a6f1bb606"} Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.487762 5058 scope.go:117] "RemoveContainer" containerID="5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.487422 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tp2l7" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.515902 5058 scope.go:117] "RemoveContainer" containerID="8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.531907 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.536693 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tp2l7"] Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.559929 5058 scope.go:117] "RemoveContainer" containerID="b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.580068 5058 scope.go:117] "RemoveContainer" containerID="5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912" Oct 14 08:40:56 crc kubenswrapper[5058]: E1014 08:40:56.580475 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912\": container with ID starting with 5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912 not found: ID does not exist" containerID="5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.580520 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912"} err="failed to get container status \"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912\": rpc error: code = NotFound desc = could not find container \"5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912\": container with ID starting with 5afe7d8dabf967d32fb74aba6cbb10d240dbfdd8825653b8990d90ec22c0d912 not found: ID does not exist" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.580548 5058 scope.go:117] "RemoveContainer" containerID="8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0" Oct 14 08:40:56 crc kubenswrapper[5058]: E1014 08:40:56.580878 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0\": container with ID starting with 8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0 not found: ID does not exist" containerID="8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.580920 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0"} err="failed to get container status \"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0\": rpc error: code = NotFound desc = could not find container \"8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0\": container with ID starting with 8eb5156b6ec5232e25eaa7e850cd7d1b85e96a3b443d3a1021b3054b436186f0 not found: ID does not exist" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.580948 5058 scope.go:117] "RemoveContainer" containerID="b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773" Oct 14 08:40:56 crc kubenswrapper[5058]: E1014 08:40:56.581194 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773\": container with ID starting with b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773 not found: ID does not exist" containerID="b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.581229 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773"} err="failed to get container status \"b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773\": rpc error: code = NotFound desc = could not find container \"b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773\": container with ID starting with b888c81eeb52b2dddede1cb59aaf566d48e1784ad5626251179e1fd9a11b2773 not found: ID does not exist" Oct 14 08:40:56 crc kubenswrapper[5058]: I1014 08:40:56.806762 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" path="/var/lib/kubelet/pods/8e28ac7f-6b0b-460e-9407-9e0e4d5850c0/volumes" Oct 14 08:41:33 crc kubenswrapper[5058]: I1014 08:41:33.656600 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:41:33 crc kubenswrapper[5058]: I1014 08:41:33.658019 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:42:03 crc kubenswrapper[5058]: I1014 08:42:03.656017 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:42:03 crc kubenswrapper[5058]: I1014 08:42:03.657753 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.773603 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:27 crc kubenswrapper[5058]: E1014 08:42:27.774578 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="extract-utilities" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.774594 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="extract-utilities" Oct 14 08:42:27 crc kubenswrapper[5058]: E1014 08:42:27.774608 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="registry-server" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.774616 5058 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="registry-server" Oct 14 08:42:27 crc kubenswrapper[5058]: E1014 08:42:27.774631 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="extract-content" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.774638 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="extract-content" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.774865 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e28ac7f-6b0b-460e-9407-9e0e4d5850c0" containerName="registry-server" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.776047 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.802833 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.943701 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x57nb\" (UniqueName: \"kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.943842 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:27 crc kubenswrapper[5058]: I1014 08:42:27.943926 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.045846 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x57nb\" (UniqueName: \"kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.045907 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.045937 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.046416 
5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.046534 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.076313 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x57nb\" (UniqueName: \"kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb\") pod \"redhat-marketplace-9nfm9\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.111354 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:28 crc kubenswrapper[5058]: I1014 08:42:28.536442 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:29 crc kubenswrapper[5058]: I1014 08:42:29.382354 5058 generic.go:334] "Generic (PLEG): container finished" podID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerID="777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31" exitCode=0 Oct 14 08:42:29 crc kubenswrapper[5058]: I1014 08:42:29.382447 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerDied","Data":"777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31"} Oct 14 08:42:29 crc kubenswrapper[5058]: I1014 08:42:29.382528 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerStarted","Data":"2e0fb94913c67b6f3111f407052209dd838524aec02e5a991a7b03001d4c76ff"} Oct 14 08:42:30 crc kubenswrapper[5058]: I1014 08:42:30.396327 5058 generic.go:334] "Generic (PLEG): container finished" podID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerID="a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af" exitCode=0 Oct 14 08:42:30 crc kubenswrapper[5058]: I1014 08:42:30.396417 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerDied","Data":"a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af"} Oct 14 08:42:31 crc kubenswrapper[5058]: I1014 08:42:31.407248 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerStarted","Data":"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178"} Oct 14 08:42:31 crc kubenswrapper[5058]: I1014 08:42:31.442566 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9nfm9" podStartSLOduration=2.815976771 podStartE2EDuration="4.442534914s" podCreationTimestamp="2025-10-14 
08:42:27 +0000 UTC" firstStartedPulling="2025-10-14 08:42:29.384706958 +0000 UTC m=+6897.295790764" lastFinishedPulling="2025-10-14 08:42:31.011265071 +0000 UTC m=+6898.922348907" observedRunningTime="2025-10-14 08:42:31.433240011 +0000 UTC m=+6899.344323897" watchObservedRunningTime="2025-10-14 08:42:31.442534914 +0000 UTC m=+6899.353618760" Oct 14 08:42:33 crc kubenswrapper[5058]: I1014 08:42:33.656152 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:42:33 crc kubenswrapper[5058]: I1014 08:42:33.656565 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:42:33 crc kubenswrapper[5058]: I1014 08:42:33.656623 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:42:33 crc kubenswrapper[5058]: I1014 08:42:33.657745 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:42:33 crc kubenswrapper[5058]: I1014 08:42:33.658139 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e" gracePeriod=600 Oct 14 08:42:34 crc kubenswrapper[5058]: I1014 08:42:34.438192 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e" exitCode=0 Oct 14 08:42:34 crc kubenswrapper[5058]: I1014 08:42:34.438292 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e"} Oct 14 08:42:34 crc kubenswrapper[5058]: I1014 08:42:34.438668 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"} Oct 14 08:42:34 crc kubenswrapper[5058]: I1014 08:42:34.438720 5058 scope.go:117] "RemoveContainer" containerID="a6cda8ed6aa7ebffb504fa99c17d32cd72b1e2a47484c587e7b6791013156669" Oct 14 08:42:38 crc kubenswrapper[5058]: I1014 08:42:38.112363 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:38 crc kubenswrapper[5058]: I1014 08:42:38.113062 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:38 crc kubenswrapper[5058]: I1014 08:42:38.177399 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:38 crc kubenswrapper[5058]: I1014 08:42:38.561564 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:38 crc kubenswrapper[5058]: I1014 08:42:38.633444 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:40 crc kubenswrapper[5058]: I1014 08:42:40.508178 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9nfm9" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="registry-server" containerID="cri-o://230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178" gracePeriod=2 Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.035565 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.090742 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x57nb\" (UniqueName: \"kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb\") pod \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.090940 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content\") pod \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.091056 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities\") pod \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\" (UID: \"1f12e81a-15ee-49d4-a21a-89ca8a1ed438\") " Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.093255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities" (OuterVolumeSpecName: "utilities") pod "1f12e81a-15ee-49d4-a21a-89ca8a1ed438" (UID: "1f12e81a-15ee-49d4-a21a-89ca8a1ed438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.100074 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb" (OuterVolumeSpecName: "kube-api-access-x57nb") pod "1f12e81a-15ee-49d4-a21a-89ca8a1ed438" (UID: "1f12e81a-15ee-49d4-a21a-89ca8a1ed438"). InnerVolumeSpecName "kube-api-access-x57nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.110221 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f12e81a-15ee-49d4-a21a-89ca8a1ed438" (UID: "1f12e81a-15ee-49d4-a21a-89ca8a1ed438"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.193311 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x57nb\" (UniqueName: \"kubernetes.io/projected/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-kube-api-access-x57nb\") on node \"crc\" DevicePath \"\"" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.193404 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.193424 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f12e81a-15ee-49d4-a21a-89ca8a1ed438-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.522425 5058 generic.go:334] "Generic (PLEG): container finished" podID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerID="230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178" exitCode=0 Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.522523 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerDied","Data":"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178"} Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.522603 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9nfm9" event={"ID":"1f12e81a-15ee-49d4-a21a-89ca8a1ed438","Type":"ContainerDied","Data":"2e0fb94913c67b6f3111f407052209dd838524aec02e5a991a7b03001d4c76ff"} Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.522633 5058 scope.go:117] "RemoveContainer" containerID="230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.522554 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9nfm9" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.569271 5058 scope.go:117] "RemoveContainer" containerID="a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.581624 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.591251 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9nfm9"] Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.604409 5058 scope.go:117] "RemoveContainer" containerID="777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.656587 5058 scope.go:117] "RemoveContainer" containerID="230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178" Oct 14 08:42:41 crc kubenswrapper[5058]: E1014 08:42:41.657184 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178\": container with ID starting with 230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178 not found: ID does not exist" containerID="230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.657236 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178"} err="failed to get container status \"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178\": rpc error: code = NotFound desc = could not find container \"230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178\": container with ID starting with 230efe9105a603dcc9189280b99ad437545283aae8173a42d3bdf89a56c74178 not found: ID does not exist" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.657269 5058 scope.go:117] "RemoveContainer" containerID="a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af" Oct 14 08:42:41 crc kubenswrapper[5058]: E1014 08:42:41.658110 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af\": container with ID starting with a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af not found: ID does not exist" containerID="a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.658177 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af"} err="failed to get container status \"a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af\": rpc error: code = NotFound desc = could not find container \"a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af\": container with ID starting with a8941cd07dbe7c2e01ffc48878cee3b993ada5befbcc8d77dd114c969e4184af not found: ID does not exist" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.658216 5058 scope.go:117] "RemoveContainer" containerID="777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31" Oct 14 08:42:41 crc kubenswrapper[5058]: E1014 08:42:41.658721 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31\": container with ID starting with 777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31 not found: ID does not exist" containerID="777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31" Oct 14 08:42:41 crc kubenswrapper[5058]: I1014 08:42:41.658766 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31"} err="failed to get container status \"777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31\": rpc error: code = NotFound desc = could not find container \"777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31\": container with ID starting with 777a1029f2ed46f330974162cc117d925bc48d059c0ca0f4bfbeb4325c738a31 not found: ID does not exist" Oct 14 08:42:42 crc kubenswrapper[5058]: I1014 08:42:42.812039 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" path="/var/lib/kubelet/pods/1f12e81a-15ee-49d4-a21a-89ca8a1ed438/volumes" Oct 14 08:43:53 crc kubenswrapper[5058]: I1014 08:43:53.937887 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-zpvzc"] Oct 14 08:43:53 crc kubenswrapper[5058]: I1014 08:43:53.949381 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-zpvzc"] Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.099743 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-w7cjf"] Oct 14 08:43:54 crc kubenswrapper[5058]: E1014 08:43:54.100115 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="extract-utilities" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.100139 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="extract-utilities" Oct 14 08:43:54 crc kubenswrapper[5058]: E1014 08:43:54.100165 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="registry-server" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.100173 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="registry-server" Oct 14 08:43:54 crc kubenswrapper[5058]: E1014 08:43:54.100199 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="extract-content" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.100207 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="extract-content" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.100424 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f12e81a-15ee-49d4-a21a-89ca8a1ed438" containerName="registry-server" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.100976 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.102651 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.103730 5058 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nmkt2" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.104067 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.104339 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.114485 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-w7cjf"] Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.193301 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.193836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhjt\" (UniqueName: \"kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.193909 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.295341 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.295422 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.295516 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhjt\" (UniqueName: \"kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.295882 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " 
pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.296328 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.321191 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhjt\" (UniqueName: \"kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt\") pod \"crc-storage-crc-w7cjf\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.489366 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:54 crc kubenswrapper[5058]: I1014 08:43:54.810645 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b22025-6ec3-4bd1-9da1-4ee638664213" path="/var/lib/kubelet/pods/46b22025-6ec3-4bd1-9da1-4ee638664213/volumes" Oct 14 08:43:55 crc kubenswrapper[5058]: I1014 08:43:55.000098 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-w7cjf"] Oct 14 08:43:55 crc kubenswrapper[5058]: I1014 08:43:55.242936 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-w7cjf" event={"ID":"81e88c14-2ac0-4e0b-9785-37bd63c74ead","Type":"ContainerStarted","Data":"d6d565760685b945721b3d2b91d13ae5b24f24c0015b266c7d2f4b863a278a38"} Oct 14 08:43:56 crc kubenswrapper[5058]: I1014 08:43:56.254604 5058 generic.go:334] "Generic (PLEG): container finished" podID="81e88c14-2ac0-4e0b-9785-37bd63c74ead" containerID="bfde8c282ab91184765e7ba26539137fb0c1ef36846f08cd06351e7ad1bb3191" exitCode=0 Oct 14 08:43:56 crc kubenswrapper[5058]: I1014 08:43:56.254823 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-w7cjf" event={"ID":"81e88c14-2ac0-4e0b-9785-37bd63c74ead","Type":"ContainerDied","Data":"bfde8c282ab91184765e7ba26539137fb0c1ef36846f08cd06351e7ad1bb3191"} Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.680962 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.774976 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage\") pod \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.775026 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhjt\" (UniqueName: \"kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt\") pod \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.775058 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt\") pod \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\" (UID: \"81e88c14-2ac0-4e0b-9785-37bd63c74ead\") " Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.775319 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "81e88c14-2ac0-4e0b-9785-37bd63c74ead" (UID: "81e88c14-2ac0-4e0b-9785-37bd63c74ead"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.780619 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt" (OuterVolumeSpecName: "kube-api-access-2lhjt") pod "81e88c14-2ac0-4e0b-9785-37bd63c74ead" (UID: "81e88c14-2ac0-4e0b-9785-37bd63c74ead"). InnerVolumeSpecName "kube-api-access-2lhjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.809169 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "81e88c14-2ac0-4e0b-9785-37bd63c74ead" (UID: "81e88c14-2ac0-4e0b-9785-37bd63c74ead"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.877114 5058 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/81e88c14-2ac0-4e0b-9785-37bd63c74ead-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.877155 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhjt\" (UniqueName: \"kubernetes.io/projected/81e88c14-2ac0-4e0b-9785-37bd63c74ead-kube-api-access-2lhjt\") on node \"crc\" DevicePath \"\"" Oct 14 08:43:57 crc kubenswrapper[5058]: I1014 08:43:57.877196 5058 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/81e88c14-2ac0-4e0b-9785-37bd63c74ead-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 14 08:43:58 crc kubenswrapper[5058]: I1014 08:43:58.279001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-w7cjf" event={"ID":"81e88c14-2ac0-4e0b-9785-37bd63c74ead","Type":"ContainerDied","Data":"d6d565760685b945721b3d2b91d13ae5b24f24c0015b266c7d2f4b863a278a38"} Oct 14 08:43:58 crc kubenswrapper[5058]: I1014 08:43:58.279055 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d565760685b945721b3d2b91d13ae5b24f24c0015b266c7d2f4b863a278a38" Oct 14 08:43:58 crc kubenswrapper[5058]: I1014 08:43:58.279144 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-w7cjf" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.172949 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-w7cjf"] Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.181305 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-w7cjf"] Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.324927 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-smhdz"] Oct 14 08:44:00 crc kubenswrapper[5058]: E1014 08:44:00.325410 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e88c14-2ac0-4e0b-9785-37bd63c74ead" containerName="storage" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.325444 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e88c14-2ac0-4e0b-9785-37bd63c74ead" containerName="storage" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.325724 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e88c14-2ac0-4e0b-9785-37bd63c74ead" containerName="storage" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.326582 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.330098 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.330980 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.331223 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.331558 5058 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nmkt2" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.346077 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-smhdz"] Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.522202 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.522766 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxl2\" (UniqueName: \"kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.522941 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.625303 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wxl2\" (UniqueName: \"kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.625580 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.625674 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.626051 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " 
pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.628622 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.660375 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wxl2\" (UniqueName: \"kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2\") pod \"crc-storage-crc-smhdz\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.812623 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e88c14-2ac0-4e0b-9785-37bd63c74ead" path="/var/lib/kubelet/pods/81e88c14-2ac0-4e0b-9785-37bd63c74ead/volumes" Oct 14 08:44:00 crc kubenswrapper[5058]: I1014 08:44:00.957274 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:01 crc kubenswrapper[5058]: I1014 08:44:01.461914 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-smhdz"] Oct 14 08:44:02 crc kubenswrapper[5058]: I1014 08:44:02.329686 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-smhdz" event={"ID":"b8470a5c-e7c5-4b7a-a670-02850332f2d8","Type":"ContainerStarted","Data":"8e63e1d8eb69f4d66722d131030492945459a145d6cbbe9b52111d48b5e05507"} Oct 14 08:44:02 crc kubenswrapper[5058]: I1014 08:44:02.330127 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-smhdz" event={"ID":"b8470a5c-e7c5-4b7a-a670-02850332f2d8","Type":"ContainerStarted","Data":"6a10448462ca82519a9c7c1cf67b2b6752d63a084c69d923ab5dc03f3613fb65"} Oct 14 08:44:02 crc kubenswrapper[5058]: I1014 08:44:02.360598 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-smhdz" podStartSLOduration=1.791356723 podStartE2EDuration="2.360572915s" podCreationTimestamp="2025-10-14 08:44:00 +0000 UTC" firstStartedPulling="2025-10-14 08:44:01.486680419 +0000 UTC m=+6989.397764255" lastFinishedPulling="2025-10-14 08:44:02.055896601 +0000 UTC m=+6989.966980447" observedRunningTime="2025-10-14 08:44:02.35320637 +0000 UTC m=+6990.264290246" watchObservedRunningTime="2025-10-14 08:44:02.360572915 +0000 UTC m=+6990.271656751" Oct 14 08:44:03 crc kubenswrapper[5058]: I1014 08:44:03.342940 5058 generic.go:334] "Generic (PLEG): container finished" podID="b8470a5c-e7c5-4b7a-a670-02850332f2d8" containerID="8e63e1d8eb69f4d66722d131030492945459a145d6cbbe9b52111d48b5e05507" exitCode=0 Oct 14 08:44:03 crc kubenswrapper[5058]: I1014 08:44:03.343063 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-smhdz" event={"ID":"b8470a5c-e7c5-4b7a-a670-02850332f2d8","Type":"ContainerDied","Data":"8e63e1d8eb69f4d66722d131030492945459a145d6cbbe9b52111d48b5e05507"} Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.758175 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.920586 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage\") pod \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.920727 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt\") pod \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.920856 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wxl2\" (UniqueName: \"kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2\") pod \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\" (UID: \"b8470a5c-e7c5-4b7a-a670-02850332f2d8\") " Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.920881 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b8470a5c-e7c5-4b7a-a670-02850332f2d8" (UID: "b8470a5c-e7c5-4b7a-a670-02850332f2d8"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.921470 5058 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8470a5c-e7c5-4b7a-a670-02850332f2d8-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.930727 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2" (OuterVolumeSpecName: "kube-api-access-9wxl2") pod "b8470a5c-e7c5-4b7a-a670-02850332f2d8" (UID: "b8470a5c-e7c5-4b7a-a670-02850332f2d8"). InnerVolumeSpecName "kube-api-access-9wxl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:44:04 crc kubenswrapper[5058]: I1014 08:44:04.944511 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b8470a5c-e7c5-4b7a-a670-02850332f2d8" (UID: "b8470a5c-e7c5-4b7a-a670-02850332f2d8"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:44:05 crc kubenswrapper[5058]: I1014 08:44:05.022983 5058 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8470a5c-e7c5-4b7a-a670-02850332f2d8-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 14 08:44:05 crc kubenswrapper[5058]: I1014 08:44:05.023282 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wxl2\" (UniqueName: \"kubernetes.io/projected/b8470a5c-e7c5-4b7a-a670-02850332f2d8-kube-api-access-9wxl2\") on node \"crc\" DevicePath \"\"" Oct 14 08:44:05 crc kubenswrapper[5058]: I1014 08:44:05.366242 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-smhdz" event={"ID":"b8470a5c-e7c5-4b7a-a670-02850332f2d8","Type":"ContainerDied","Data":"6a10448462ca82519a9c7c1cf67b2b6752d63a084c69d923ab5dc03f3613fb65"} Oct 14 08:44:05 crc kubenswrapper[5058]: I1014 08:44:05.366316 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a10448462ca82519a9c7c1cf67b2b6752d63a084c69d923ab5dc03f3613fb65" Oct 14 08:44:05 crc kubenswrapper[5058]: I1014 08:44:05.366334 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-smhdz" Oct 14 08:44:33 crc kubenswrapper[5058]: I1014 08:44:33.656238 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:44:33 crc kubenswrapper[5058]: I1014 08:44:33.656908 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:44:40 crc kubenswrapper[5058]: I1014 08:44:40.335826 5058 scope.go:117] "RemoveContainer" containerID="2814f53bbc9e56c00725c9e922d34cb8904cb61d98a5955178f1d41aa429a203" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.193638 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q"] Oct 14 08:45:00 crc kubenswrapper[5058]: E1014 08:45:00.194756 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8470a5c-e7c5-4b7a-a670-02850332f2d8" containerName="storage" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.194778 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8470a5c-e7c5-4b7a-a670-02850332f2d8" containerName="storage" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.195214 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8470a5c-e7c5-4b7a-a670-02850332f2d8" containerName="storage" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.195782 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.200920 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.201348 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.216969 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q"] Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.345252 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlt5\" (UniqueName: \"kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.345318 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.345411 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.446269 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.446369 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlt5\" (UniqueName: \"kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.446403 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.448492 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume\") pod 
\"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.455683 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.473265 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlt5\" (UniqueName: \"kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5\") pod \"collect-profiles-29340525-7gg2q\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.526199 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.831144 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q"] Oct 14 08:45:00 crc kubenswrapper[5058]: I1014 08:45:00.907087 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" event={"ID":"79a08084-e1c4-4449-8c5c-c15d016cfa6a","Type":"ContainerStarted","Data":"fc7e4b7c47c7cfe2c60677d6b14d64d275866dcffbe71cdeaf5bb9ffdb3beb34"} Oct 14 08:45:01 crc kubenswrapper[5058]: I1014 08:45:01.918426 5058 generic.go:334] "Generic (PLEG): container finished" podID="79a08084-e1c4-4449-8c5c-c15d016cfa6a" containerID="90695985ee0c5bc379f7d9ec3b769db298b00ecf5c6250dff0669d07031968ba" exitCode=0 Oct 14 08:45:01 crc kubenswrapper[5058]: I1014 08:45:01.919937 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" event={"ID":"79a08084-e1c4-4449-8c5c-c15d016cfa6a","Type":"ContainerDied","Data":"90695985ee0c5bc379f7d9ec3b769db298b00ecf5c6250dff0669d07031968ba"} Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.226625 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.291972 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nlt5\" (UniqueName: \"kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5\") pod \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.292045 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume\") pod \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.292067 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume\") pod \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\" (UID: \"79a08084-e1c4-4449-8c5c-c15d016cfa6a\") " Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.299436 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5" (OuterVolumeSpecName: "kube-api-access-6nlt5") pod "79a08084-e1c4-4449-8c5c-c15d016cfa6a" (UID: "79a08084-e1c4-4449-8c5c-c15d016cfa6a"). InnerVolumeSpecName "kube-api-access-6nlt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.300005 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "79a08084-e1c4-4449-8c5c-c15d016cfa6a" (UID: "79a08084-e1c4-4449-8c5c-c15d016cfa6a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.303627 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79a08084-e1c4-4449-8c5c-c15d016cfa6a" (UID: "79a08084-e1c4-4449-8c5c-c15d016cfa6a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.394154 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nlt5\" (UniqueName: \"kubernetes.io/projected/79a08084-e1c4-4449-8c5c-c15d016cfa6a-kube-api-access-6nlt5\") on node \"crc\" DevicePath \"\"" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.394440 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79a08084-e1c4-4449-8c5c-c15d016cfa6a-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.394520 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79a08084-e1c4-4449-8c5c-c15d016cfa6a-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.655561 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.655767 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.937525 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" event={"ID":"79a08084-e1c4-4449-8c5c-c15d016cfa6a","Type":"ContainerDied","Data":"fc7e4b7c47c7cfe2c60677d6b14d64d275866dcffbe71cdeaf5bb9ffdb3beb34"} Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.937916 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7e4b7c47c7cfe2c60677d6b14d64d275866dcffbe71cdeaf5bb9ffdb3beb34" Oct 14 08:45:03 crc kubenswrapper[5058]: I1014 08:45:03.937605 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q" Oct 14 08:45:04 crc kubenswrapper[5058]: I1014 08:45:04.302275 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc"] Oct 14 08:45:04 crc kubenswrapper[5058]: I1014 08:45:04.317482 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340480-gsmrc"] Oct 14 08:45:04 crc kubenswrapper[5058]: I1014 08:45:04.806501 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3322ac8f-e633-4400-992a-f1f762b6f65c" path="/var/lib/kubelet/pods/3322ac8f-e633-4400-992a-f1f762b6f65c/volumes" Oct 14 08:45:33 crc kubenswrapper[5058]: I1014 08:45:33.655784 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:45:33 crc kubenswrapper[5058]: I1014 08:45:33.656498 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:45:33 crc kubenswrapper[5058]: I1014 08:45:33.656549 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:45:33 crc kubenswrapper[5058]: I1014 08:45:33.657249 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:45:33 crc kubenswrapper[5058]: I1014 08:45:33.657336 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" gracePeriod=600 Oct 14 08:45:33 crc kubenswrapper[5058]: E1014 08:45:33.790894 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:45:34 crc kubenswrapper[5058]: I1014 08:45:34.240234 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" exitCode=0 Oct 14 08:45:34 crc kubenswrapper[5058]: I1014 08:45:34.240320 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"} Oct 14 08:45:34 crc kubenswrapper[5058]: I1014 08:45:34.240432 5058 scope.go:117] "RemoveContainer" containerID="2a43b2198075bda9aa78711e34ddde57e7bb3c93314261f5c8037e73fbe5bf8e" Oct 14 08:45:34 crc kubenswrapper[5058]: I1014 08:45:34.241162 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:45:34 crc kubenswrapper[5058]: E1014 08:45:34.241533 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:45:40 crc kubenswrapper[5058]: I1014 08:45:40.424764 5058 scope.go:117] "RemoveContainer" containerID="71d45c55e6c5164bfe0a1fcf48e5652ae2cf9fa883656d87f7d39b2a533d6136" Oct 14 08:45:47 crc kubenswrapper[5058]: I1014 08:45:47.791026 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:45:47 crc kubenswrapper[5058]: E1014 08:45:47.792146 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:45:59 crc kubenswrapper[5058]: I1014 08:45:59.790491 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:45:59 crc kubenswrapper[5058]: E1014 08:45:59.791541 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.698320 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"] Oct 14 08:46:06 crc kubenswrapper[5058]: E1014 08:46:06.698974 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a08084-e1c4-4449-8c5c-c15d016cfa6a" containerName="collect-profiles" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.698987 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a08084-e1c4-4449-8c5c-c15d016cfa6a" containerName="collect-profiles" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.699140 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a08084-e1c4-4449-8c5c-c15d016cfa6a" containerName="collect-profiles" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.699846 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.701354 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.701789 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kp82r" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.701928 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.702030 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.703260 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.715445 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"] Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.839286 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.839344 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2kk\" (UniqueName: \"kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.839486 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.940764 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.940855 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.940891 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v2kk\" (UniqueName: \"kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.942386 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.942902 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:06 crc kubenswrapper[5058]: I1014 08:46:06.965355 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v2kk\" (UniqueName: \"kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk\") pod \"dnsmasq-dns-7bd8799f99-ncn2m\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") " pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.023789 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.038248 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.039342 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.053505 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.151548 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q27w4\" (UniqueName: \"kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.151861 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.151890 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.253468 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.253511 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: 
\"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.253591 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q27w4\" (UniqueName: \"kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.254562 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.254961 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.316961 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.326593 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q27w4\" (UniqueName: \"kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4\") pod \"dnsmasq-dns-547c9989d7-9jvn4\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") " pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.348229 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.349295 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.401866 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.459893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.479529 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.479841 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6826\" (UniqueName: \"kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.479868 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.562524 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.580576 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.580614 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6826\" (UniqueName: \"kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.580638 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.581471 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.581473 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: W1014 08:46:07.583210 5058 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4270a2b5_97e2_456a_80ce_8aacdac8abc6.slice/crio-bcf1a8c7f0dd2cffcd4b9057d0859752a1f1892f00ced1f012d32fe015fb658b WatchSource:0}: Error finding container bcf1a8c7f0dd2cffcd4b9057d0859752a1f1892f00ced1f012d32fe015fb658b: Status 404 returned error can't find the container with id bcf1a8c7f0dd2cffcd4b9057d0859752a1f1892f00ced1f012d32fe015fb658b Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.584846 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.615022 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6826\" (UniqueName: \"kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826\") pod \"dnsmasq-dns-6f6687b8c-vmz7v\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") " pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.674855 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.701350 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.702522 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.707182 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.714708 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.879071 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.880806 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
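The W-level "Failed to process watch event ... Status 404" entries here (and again below for container f90d14...) appear to be a benign race: the cgroup watcher notices a new crio-* cgroup via inotify, but by the time the container is looked up in the runtime it is either not yet registered or already gone, so the lookup 404s and the event is dropped. A sketch of that tolerate-the-race handling under those assumptions; lookupContainer, processWatchEvent and errNotFound are illustrative names, not the actual cadvisor code:

```go
// Sketch of the race behind "Failed to process watch event ... 404": a
// cgroup watch fires before the runtime can resolve the container ID, so
// a not-found lookup is logged as a warning and skipped, not treated as
// fatal. All identifiers here are illustrative.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("can't find the container with id")

// lookupContainer simulates querying the runtime for a container that the
// cgroup watcher saw but the runtime no longer (or not yet) knows about.
func lookupContainer(id string) error {
	return fmt.Errorf("status 404: %w", errNotFound)
}

func processWatchEvent(id string) {
	if err := lookupContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			// Benign: the container came and went between the inotify
			// event and the runtime query. Warn and move on.
			fmt.Printf("W failed to process watch event for %s: %v\n", id, err)
			return
		}
		panic(err) // anything else would be a real failure
	}
}

func main() {
	processWatchEvent("bcf1a8c7f0dd") // truncated ID, for illustration only
}
```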
Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.884246 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.884786 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hts89" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.884962 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.885078 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.886908 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.892591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvt5z\" (UniqueName: \"kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.892649 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.892669 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.907892 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.972107 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"] Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvt5z\" (UniqueName: \"kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993684 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxhc\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993722 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993770 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993804 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993856 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993876 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993900 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993920 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.993965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " 
pod="openstack/rabbitmq-server-0" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.994764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:07 crc kubenswrapper[5058]: I1014 08:46:07.995173 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.010249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvt5z\" (UniqueName: \"kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z\") pod \"dnsmasq-dns-dd4cb54d5-s7fg5\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.039512 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.054073 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"] Oct 14 08:46:08 crc kubenswrapper[5058]: W1014 08:46:08.066724 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c614fd_dae6_4cbc_bc5b_362c55fcf2b4.slice/crio-f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097 WatchSource:0}: Error finding container f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097: Status 404 returned error can't find the container with id f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097 Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096275 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096330 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxhc\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096364 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096391 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " 
pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096419 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096440 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096456 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.096505 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.097512 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.097943 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.100333 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.100361 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b04d1845d37e5dca694210cfb352c5b6bd7172e5b051bdad229e398ff013847a/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.100988 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.101106 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.101154 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.104937 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.111266 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.120253 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxhc\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.132292 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.155058 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.156348 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.160134 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.160275 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6qlnn" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.160377 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.160707 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.160732 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.179735 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.227582 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298751 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298825 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298861 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298907 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298933 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298967 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.298988 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.299013 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.299032 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7tb\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400508 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400541 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400567 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400620 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400639 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400669 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400685 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400703 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.400718 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq7tb\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.403375 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.403919 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.404644 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.405444 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.408230 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.408044 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 
08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.409282 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.410626 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.410658 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/929b0ca848490a0e7ae29651f7670beeca843a85a67f644a00cd5fbdad3ded20/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.420248 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq7tb\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.451396 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.481520 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.499063 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell2-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.506635 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.508598 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell2-erlang-cookie" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.512072 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell2-server-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.517552 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell2-plugins-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.517768 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell2-default-user" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.518048 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell2-server-dockercfg-nbckx" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.518619 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell2-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: W1014 08:46:08.523632 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91309438_7cfa_47a2_ad04_5d9a326bd940.slice/crio-9334eba3f4398e673b336bc920bcad74ae1dbcf2adb0530b45a15391537e5feb WatchSource:0}: Error finding container 9334eba3f4398e673b336bc920bcad74ae1dbcf2adb0530b45a15391537e5feb: Status 404 returned error can't find the container with id 9334eba3f4398e673b336bc920bcad74ae1dbcf2adb0530b45a15391537e5feb Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.527784 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.565188 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" event={"ID":"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4","Type":"ContainerStarted","Data":"f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097"} Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.567478 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" event={"ID":"4270a2b5-97e2-456a-80ce-8aacdac8abc6","Type":"ContainerStarted","Data":"bcf1a8c7f0dd2cffcd4b9057d0859752a1f1892f00ced1f012d32fe015fb658b"} Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.568924 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" event={"ID":"91309438-7cfa-47a2-ad04-5d9a326bd940","Type":"ContainerStarted","Data":"9334eba3f4398e673b336bc920bcad74ae1dbcf2adb0530b45a15391537e5feb"} Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.570208 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" event={"ID":"6a9d6e51-442b-4fa5-aef9-dd93d13829b2","Type":"ContainerStarted","Data":"69ebeaa7acc7dd64e71ac601232d1ca33f6043835e693a5e10720fcc1f6661c1"} Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-plugins-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605378 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2tj\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-kube-api-access-dp2tj\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605457 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605489 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605527 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/552c2732-565b-4a0b-bbc2-64ca8e832310-erlang-cookie-secret\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605551 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/552c2732-565b-4a0b-bbc2-64ca8e832310-pod-info\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605567 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-confd\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605588 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-plugins\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.605604 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-server-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.672623 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: W1014 08:46:08.693921 5058 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566005b2_aadc_466a_a1b6_ad8614d9998a.slice/crio-a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631 WatchSource:0}: Error finding container a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631: Status 404 returned error can't find the container with id a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631 Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706441 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/552c2732-565b-4a0b-bbc2-64ca8e832310-erlang-cookie-secret\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706647 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/552c2732-565b-4a0b-bbc2-64ca8e832310-pod-info\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706675 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-confd\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706702 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-plugins\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706725 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-server-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706822 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-plugins-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706855 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp2tj\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-kube-api-access-dp2tj\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706888 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.706925 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.708216 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-plugins-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.708549 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.709044 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/552c2732-565b-4a0b-bbc2-64ca8e832310-server-conf\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.709442 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-plugins\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.711903 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
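[annotation] The manager.go:1169 warnings in this excerpt appear to be cadvisor processing a cgroup-creation watch event before the new container is registered with the runtime; each 404 here is followed within roughly a second by a PLEG ContainerStarted event for the same ID (e.g. f90d1426... and 9334eba3... above), which marks the race as benign. A throwaway correlator, sketched on the assumption that the kubelet journal is piped in unmodified (for example: journalctl -u kubelet | go run correlate404.go):

// Triage sketch: pair cadvisor's "Failed to process watch event ... 404"
// warnings with later PLEG ContainerStarted events for the same container
// ID. A matched pair is the benign startup race seen in this journal; an
// unmatched ID at EOF would deserve a closer look.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	warned  = regexp.MustCompile(`Failed to process watch event .*crio-([0-9a-f]{64})`)
	started = regexp.MustCompile(`ContainerStarted","Data":"([0-9a-f]{64})"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := warned.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		}
		if m := started.FindStringSubmatch(line); m != nil && pending[m[1]] {
			fmt.Printf("benign race: %s started after 404 warning\n", m[1][:12])
			delete(pending, m[1])
		}
	}
	for id := range pending {
		fmt.Printf("no ContainerStarted seen for %s\n", id[:12])
	}
}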
Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.711935 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/241ab17ea282ec662dc1c177cea2810cf5fd47645cfd5a3cedb810ecced16f27/globalmount\"" pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.712103 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/552c2732-565b-4a0b-bbc2-64ca8e832310-erlang-cookie-secret\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.716220 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/552c2732-565b-4a0b-bbc2-64ca8e832310-pod-info\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.716393 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-rabbitmq-confd\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.725438 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp2tj\" (UniqueName: \"kubernetes.io/projected/552c2732-565b-4a0b-bbc2-64ca8e832310-kube-api-access-dp2tj\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.725732 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.734774 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.738402 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.738840 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.739043 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.739486 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.739783 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.739955 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5lx79" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.750458 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.776283 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3bfde24-b4b6-4010-9bea-cb8a43d9e3e6\") pod \"rabbitmq-cell2-server-0\" (UID: \"552c2732-565b-4a0b-bbc2-64ca8e832310\") " pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.808695 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.808752 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-default\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.808980 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-generated\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.809083 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.809123 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-secrets\") pod \"openstack-galera-0\" (UID: 
\"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.814525 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-kolla-config\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.814814 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.814847 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6m65\" (UniqueName: \"kubernetes.io/projected/654ae580-516a-4b85-a9e8-39ebff319e12-kube-api-access-r6m65\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.814873 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-operator-scripts\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.844126 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.858737 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell3-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.868376 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.869864 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell3-default-user" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.873661 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell3-erlang-cookie" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.873870 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell3-server-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.874103 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell3-plugins-conf" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.874265 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell3-server-dockercfg-8td5r" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.878640 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell3-server-0"] Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920246 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-operator-scripts\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920310 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920336 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-default\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920513 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-generated\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920838 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920915 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-secrets\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.920932 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-kolla-config\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.921089 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.921110 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6m65\" (UniqueName: \"kubernetes.io/projected/654ae580-516a-4b85-a9e8-39ebff319e12-kube-api-access-r6m65\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.921236 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-default\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.921932 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/654ae580-516a-4b85-a9e8-39ebff319e12-config-data-generated\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.923474 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-kolla-config\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.924761 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.927922 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.927994 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/30cd12de0b6194f69e5ea6a6b73137c706853b37ab0c63fc03b6fa45e4d1b725/globalmount\"" pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.928094 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-secrets\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.928391 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ae580-516a-4b85-a9e8-39ebff319e12-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.930730 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/654ae580-516a-4b85-a9e8-39ebff319e12-operator-scripts\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.937612 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6m65\" (UniqueName: \"kubernetes.io/projected/654ae580-516a-4b85-a9e8-39ebff319e12-kube-api-access-r6m65\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.970011 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09c1b350-9192-4ab0-b1a2-57329f4c1bf7\") pod \"openstack-galera-0\" (UID: \"654ae580-516a-4b85-a9e8-39ebff319e12\") " pod="openstack/openstack-galera-0" Oct 14 08:46:08 crc kubenswrapper[5058]: I1014 08:46:08.984079 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.023845 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-plugins\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.023883 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.023906 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3001759e-a841-4846-8879-5e8fc14f1f5e-erlang-cookie-secret\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.024026 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-plugins-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.024079 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.024733 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9fh\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-kube-api-access-dp9fh\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.024781 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-confd\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.025429 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3001759e-a841-4846-8879-5e8fc14f1f5e-pod-info\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.025498 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-server-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.067621 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128582 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-server-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-plugins\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128668 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128687 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3001759e-a841-4846-8879-5e8fc14f1f5e-erlang-cookie-secret\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128733 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-plugins-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128761 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128785 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp9fh\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-kube-api-access-dp9fh\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128830 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-confd\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.128869 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3001759e-a841-4846-8879-5e8fc14f1f5e-pod-info\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 
crc kubenswrapper[5058]: I1014 08:46:09.132123 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-plugins-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.132969 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-plugins\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.135377 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-confd\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.136543 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3001759e-a841-4846-8879-5e8fc14f1f5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.142388 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3001759e-a841-4846-8879-5e8fc14f1f5e-server-conf\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.145518 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
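[editor's note] The csi_attacher.go:380 message just above records the kubelet noticing that this CSI driver (kubevirt.io.hostpath-provisioner) does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the NodeStageVolume step is skipped and the device-mount phase is reported as an immediate success. A minimal Go sketch of that gate, simplified from what the kubelet does over gRPC — the names here are illustrative stand-ins, not the real kubelet types:

```go
package main

import "fmt"

// nodeCapabilities stands in for the set returned by the CSI driver's
// NodeGetCapabilities RPC; the real kubelet queries this over gRPC.
type nodeCapabilities map[string]bool

// mountDevice mirrors the decision logged at csi_attacher.go:380:
// without STAGE_UNSTAGE_VOLUME, NodeStageVolume is skipped entirely
// and MountVolume.MountDevice trivially "succeeds".
func mountDevice(caps nodeCapabilities, volumeID, stagingPath string) error {
	if !caps["STAGE_UNSTAGE_VOLUME"] {
		fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
		return nil // proceed straight to NodePublishVolume (the SetUp phase)
	}
	// Otherwise the kubelet would call NodeStageVolume for this volume here.
	fmt.Printf("NodeStageVolume(%s) -> %s\n", volumeID, stagingPath)
	return nil
}

func main() {
	caps := nodeCapabilities{} // hostpath-style provisioner: no staging support
	_ = mountDevice(caps, "pvc-8ea33183-28e3-4601-9efa-4b7d78d60742",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/.../globalmount")
}
```

This is why every "MountVolume.MountDevice succeeded" line for this provisioner follows the capability message instantly: no staging work is actually performed.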
Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.145550 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dc290685504b6820f06f7d42a4e3a6c96b3f88a6b45947990892331a79ca77d1/globalmount\"" pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.151659 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3001759e-a841-4846-8879-5e8fc14f1f5e-pod-info\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.154183 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3001759e-a841-4846-8879-5e8fc14f1f5e-erlang-cookie-secret\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.160560 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp9fh\" (UniqueName: \"kubernetes.io/projected/3001759e-a841-4846-8879-5e8fc14f1f5e-kube-api-access-dp9fh\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.198102 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ea33183-28e3-4601-9efa-4b7d78d60742\") pod \"rabbitmq-cell3-server-0\" (UID: \"3001759e-a841-4846-8879-5e8fc14f1f5e\") " pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.333999 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell2-server-0"] Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.500089 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.584378 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerStarted","Data":"7a5a23f9e84997a0acbedf281deba5e44e61d1266b637bb8a4611da512a9e3d3"} Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.587088 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerStarted","Data":"a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631"} Oct 14 08:46:09 crc kubenswrapper[5058]: I1014 08:46:09.622614 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.147557 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.149751 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.152702 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.152999 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.153004 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.153332 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-trgfn" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.173067 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.249232 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.249319 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.249987 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250387 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250534 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250590 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnm8\" (UniqueName: \"kubernetes.io/projected/284bfce7-71a1-418e-a39d-9636fa58aec6-kube-api-access-fnnm8\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250612 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250762 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.250852 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352365 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352440 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352463 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352486 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352521 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352587 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnm8\" (UniqueName: 
\"kubernetes.io/projected/284bfce7-71a1-418e-a39d-9636fa58aec6-kube-api-access-fnnm8\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352614 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.352640 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.353060 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.353946 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.356942 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.357432 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.358281 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.359868 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/284bfce7-71a1-418e-a39d-9636fa58aec6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.360403 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284bfce7-71a1-418e-a39d-9636fa58aec6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.373351 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.373388 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/286def4ceda2cc3acaff41ded56c6cc63892327d5df04c1778eaf6e6086e482b/globalmount\"" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.380396 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnm8\" (UniqueName: \"kubernetes.io/projected/284bfce7-71a1-418e-a39d-9636fa58aec6-kube-api-access-fnnm8\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.417133 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e44990f-a3a0-4ea4-96c6-3fc1f726d600\") pod \"openstack-cell1-galera-0\" (UID: \"284bfce7-71a1-418e-a39d-9636fa58aec6\") " pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.471514 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.565069 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell3-server-0"] Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.606861 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell2-server-0" event={"ID":"552c2732-565b-4a0b-bbc2-64ca8e832310","Type":"ContainerStarted","Data":"829ff96861ca1459f1fdb499fdccaa4a721084341b49b8bb23dba4ca10a938e2"} Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.618077 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"654ae580-516a-4b85-a9e8-39ebff319e12","Type":"ContainerStarted","Data":"32d48e7fbeb708a1b19ccb1f7acc36242439e277abeb77a3dbfa26ff7829fab3"} Oct 14 08:46:10 crc kubenswrapper[5058]: W1014 08:46:10.673367 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3001759e_a841_4846_8879_5e8fc14f1f5e.slice/crio-a76a85ebf62f01786fcfbaeca7875b6f4285827dfc92a09d31dd34bf5151fbe3 WatchSource:0}: Error finding container a76a85ebf62f01786fcfbaeca7875b6f4285827dfc92a09d31dd34bf5151fbe3: Status 404 returned error can't find the container with id a76a85ebf62f01786fcfbaeca7875b6f4285827dfc92a09d31dd34bf5151fbe3 Oct 14 08:46:10 crc kubenswrapper[5058]: I1014 08:46:10.950394 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 08:46:10 crc kubenswrapper[5058]: W1014 08:46:10.959555 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284bfce7_71a1_418e_a39d_9636fa58aec6.slice/crio-2755740d7bf201601097140525f0cffdf98ff99b4d21c3b4d08798c4b5be4fef WatchSource:0}: Error finding container 2755740d7bf201601097140525f0cffdf98ff99b4d21c3b4d08798c4b5be4fef: Status 404 returned error can't find the container with id 2755740d7bf201601097140525f0cffdf98ff99b4d21c3b4d08798c4b5be4fef Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.643604 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell3-server-0" event={"ID":"3001759e-a841-4846-8879-5e8fc14f1f5e","Type":"ContainerStarted","Data":"a76a85ebf62f01786fcfbaeca7875b6f4285827dfc92a09d31dd34bf5151fbe3"} Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.645002 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell2-galera-0"] Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.647053 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.647417 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"284bfce7-71a1-418e-a39d-9636fa58aec6","Type":"ContainerStarted","Data":"2755740d7bf201601097140525f0cffdf98ff99b4d21c3b4d08798c4b5be4fef"} Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.649332 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell2-scripts" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.651258 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell2-dockercfg-qzj9f" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.651434 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell2-config-data" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.651619 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell2-svc" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.662635 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell2-galera-0"] Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776129 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-operator-scripts\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776471 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm8l9\" (UniqueName: \"kubernetes.io/projected/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kube-api-access-xm8l9\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776503 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-default\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776534 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-secrets\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776555 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kolla-config\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776580 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776605 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-galera-tls-certs\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776645 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-combined-ca-bundle\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.776688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-generated\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878277 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-operator-scripts\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878361 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm8l9\" (UniqueName: \"kubernetes.io/projected/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kube-api-access-xm8l9\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878401 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-default\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878438 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-secrets\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878467 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kolla-config\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878499 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878527 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-galera-tls-certs\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878581 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-combined-ca-bundle\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.878638 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-generated\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.881677 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-generated\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.882001 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kolla-config\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.882061 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-config-data-default\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.884962 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4947a1e1-3c94-4e63-ae34-1a1540c3f121-operator-scripts\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.886896 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-combined-ca-bundle\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.887242 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.887384 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/68f3c0d20294e027473a736f2a18295181e428e7a1b2892424df482ed759aeb3/globalmount\"" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.888321 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-galera-tls-certs\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.892010 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/4947a1e1-3c94-4e63-ae34-1a1540c3f121-secrets\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.898764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm8l9\" (UniqueName: \"kubernetes.io/projected/4947a1e1-3c94-4e63-ae34-1a1540c3f121-kube-api-access-xm8l9\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.936192 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.939292 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.942765 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-llbvj" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.948015 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.962656 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.980569 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-kolla-config\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.980684 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfql\" (UniqueName: \"kubernetes.io/projected/3da7def0-22f7-4151-95ff-dbf540934e38-kube-api-access-hmfql\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.980753 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-config-data\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:11 crc kubenswrapper[5058]: I1014 08:46:11.988678 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12da15b7-2c56-4eac-9e0e-4f5cc524a757\") pod \"openstack-cell2-galera-0\" (UID: \"4947a1e1-3c94-4e63-ae34-1a1540c3f121\") " pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.082736 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfql\" (UniqueName: \"kubernetes.io/projected/3da7def0-22f7-4151-95ff-dbf540934e38-kube-api-access-hmfql\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.082832 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-config-data\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.082887 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-kolla-config\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.083942 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-kolla-config\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 
08:46:12.084257 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3da7def0-22f7-4151-95ff-dbf540934e38-config-data\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.098364 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfql\" (UniqueName: \"kubernetes.io/projected/3da7def0-22f7-4151-95ff-dbf540934e38-kube-api-access-hmfql\") pod \"memcached-0\" (UID: \"3da7def0-22f7-4151-95ff-dbf540934e38\") " pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.277031 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.278015 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.755085 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 08:46:12 crc kubenswrapper[5058]: I1014 08:46:12.856545 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell2-galera-0"] Oct 14 08:46:12 crc kubenswrapper[5058]: W1014 08:46:12.860922 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4947a1e1_3c94_4e63_ae34_1a1540c3f121.slice/crio-511f11a8609a26ef4cab8f4fd137d2b783730fbd3e25446b7d9ffbed8da8efd4 WatchSource:0}: Error finding container 511f11a8609a26ef4cab8f4fd137d2b783730fbd3e25446b7d9ffbed8da8efd4: Status 404 returned error can't find the container with id 511f11a8609a26ef4cab8f4fd137d2b783730fbd3e25446b7d9ffbed8da8efd4 Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.092655 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell3-galera-0"] Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.098921 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.101876 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell3-galera-0"] Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.104989 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell3-svc" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.105155 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell3-dockercfg-mlblk" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.105280 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell3-config-data" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.105402 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell3-scripts" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.217473 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-default\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.217666 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-generated\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.217698 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-secrets\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.217840 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.217954 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kolla-config\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.218030 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-combined-ca-bundle\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.218063 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2qbrg\" (UniqueName: \"kubernetes.io/projected/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kube-api-access-2qbrg\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.218107 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-operator-scripts\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.218256 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-galera-tls-certs\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319737 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-default\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319812 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-generated\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319836 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-secrets\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319858 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319896 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kolla-config\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319936 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-combined-ca-bundle\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.319964 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2qbrg\" (UniqueName: \"kubernetes.io/projected/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kube-api-access-2qbrg\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.320003 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-operator-scripts\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.320051 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-galera-tls-certs\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.325612 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-generated\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.325692 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kolla-config\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.325832 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-config-data-default\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.326278 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-operator-scripts\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.326942 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.326971 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c3e1124ba69f1d4ca0b9e9e8aa49c10b3523894eb69db9c4a8121a34c759757/globalmount\"" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.327372 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-galera-tls-certs\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.331231 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-secrets\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.332877 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-combined-ca-bundle\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.344861 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qbrg\" (UniqueName: \"kubernetes.io/projected/aca65ca2-d9c7-4553-9ebe-5a4a9f22a040-kube-api-access-2qbrg\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.359036 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b087ba-718a-491e-ae13-c629d5a8ac80\") pod \"openstack-cell3-galera-0\" (UID: \"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040\") " pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.450101 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.677231 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell2-galera-0" event={"ID":"4947a1e1-3c94-4e63-ae34-1a1540c3f121","Type":"ContainerStarted","Data":"511f11a8609a26ef4cab8f4fd137d2b783730fbd3e25446b7d9ffbed8da8efd4"} Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.679842 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3da7def0-22f7-4151-95ff-dbf540934e38","Type":"ContainerStarted","Data":"f128d64481818907ebe39d3ab85b5daf51a8ee9238322e739414b084ef77cd56"} Oct 14 08:46:13 crc kubenswrapper[5058]: I1014 08:46:13.789507 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:46:13 crc kubenswrapper[5058]: E1014 08:46:13.789720 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:46:16 crc kubenswrapper[5058]: I1014 08:46:16.075608 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell3-galera-0"] Oct 14 08:46:16 crc kubenswrapper[5058]: I1014 08:46:16.705865 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell3-galera-0" event={"ID":"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040","Type":"ContainerStarted","Data":"4bdaf702d47d07dbdee51e151510d1f846105618c8b28344165fec7cf396529b"} Oct 14 08:46:25 crc kubenswrapper[5058]: I1014 08:46:25.791552 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:46:25 crc kubenswrapper[5058]: E1014 08:46:25.792310 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:46:28 crc kubenswrapper[5058]: I1014 08:46:28.828129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"654ae580-516a-4b85-a9e8-39ebff319e12","Type":"ContainerStarted","Data":"49c6d2d290f17c0a84901451434b0ff710db04ccc664f867ca347fd7e91f739d"} Oct 14 08:46:28 crc kubenswrapper[5058]: I1014 08:46:28.832832 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3da7def0-22f7-4151-95ff-dbf540934e38","Type":"ContainerStarted","Data":"8fc63234a6807720bb4c4f28109293e94beb8b9ddaa3242125064ea7a2b35170"} Oct 14 08:46:28 crc kubenswrapper[5058]: I1014 08:46:28.832979 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 14 08:46:28 crc kubenswrapper[5058]: I1014 08:46:28.937532 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.7701517239999998 podStartE2EDuration="17.937506051s" podCreationTimestamp="2025-10-14 08:46:11 +0000 UTC" 
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.846740 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell2-galera-0" event={"ID":"4947a1e1-3c94-4e63-ae34-1a1540c3f121","Type":"ContainerStarted","Data":"1a2a4f3ca349624b9c2a9487bcc5e4dd6f3446de0ba1ebd74815ae036a68f73d"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.851341 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell3-galera-0" event={"ID":"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040","Type":"ContainerStarted","Data":"fbdb8ec4a1610afe1759b2cb79d27f4f862e79326f13279ad1dd3430033be54e"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.853492 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell2-server-0" event={"ID":"552c2732-565b-4a0b-bbc2-64ca8e832310","Type":"ContainerStarted","Data":"57fdc1e4eeb6bff8666046f1a11a05905ea23a5d74ee01638a0e27dc374c34d7"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.855966 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerStarted","Data":"3c352f089d49affe1535a0d8522d7a95c738710c8fe5244d10425d96a1310663"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.859202 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"284bfce7-71a1-418e-a39d-9636fa58aec6","Type":"ContainerStarted","Data":"14d27d2fc40ab5b5b6a1c10ed4501a3636560e38150ce357b5196a37ba82720a"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.860892 5058 generic.go:334] "Generic (PLEG): container finished" podID="4270a2b5-97e2-456a-80ce-8aacdac8abc6" containerID="938fbf65011736868a200f6d16e034872eb8dfd0a434f0370259bb4c219d7054" exitCode=0
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.860961 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" event={"ID":"4270a2b5-97e2-456a-80ce-8aacdac8abc6","Type":"ContainerDied","Data":"938fbf65011736868a200f6d16e034872eb8dfd0a434f0370259bb4c219d7054"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.869021 5058 generic.go:334] "Generic (PLEG): container finished" podID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerID="44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3" exitCode=0
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.869079 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" event={"ID":"91309438-7cfa-47a2-ad04-5d9a326bd940","Type":"ContainerDied","Data":"44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3"}
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.871100 5058 generic.go:334] "Generic (PLEG): container finished" podID="6a9d6e51-442b-4fa5-aef9-dd93d13829b2" containerID="b1c99b7e3ca68efbe5707735f7da26f4e96bd70587e389757e8acf319015bdf0" exitCode=0
Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.871195 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" event={"ID":"6a9d6e51-442b-4fa5-aef9-dd93d13829b2","Type":"ContainerDied","Data":"b1c99b7e3ca68efbe5707735f7da26f4e96bd70587e389757e8acf319015bdf0"}
event={"ID":"6a9d6e51-442b-4fa5-aef9-dd93d13829b2","Type":"ContainerDied","Data":"b1c99b7e3ca68efbe5707735f7da26f4e96bd70587e389757e8acf319015bdf0"} Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.873882 5058 generic.go:334] "Generic (PLEG): container finished" podID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerID="26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798" exitCode=0 Oct 14 08:46:29 crc kubenswrapper[5058]: I1014 08:46:29.874113 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" event={"ID":"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4","Type":"ContainerDied","Data":"26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798"} Oct 14 08:46:30 crc kubenswrapper[5058]: E1014 08:46:30.205534 5058 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 14 08:46:30 crc kubenswrapper[5058]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 14 08:46:30 crc kubenswrapper[5058]: > podSandboxID="f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097" Oct 14 08:46:30 crc kubenswrapper[5058]: E1014 08:46:30.206000 5058 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 14 08:46:30 crc kubenswrapper[5058]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:0468cb21803d466b2abfe00835cf1d2d,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh596h64h55bh578h8bh585h57dh564h84h5f5h596hchcfh5c6h8h66fh68bh84h7dh675h5f9h58chfchc5h677h557hbdh8fh645h77h56dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6826,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
Oct 14 08:46:30 crc kubenswrapper[5058]: > logger="UnhandledError"
Oct 14 08:46:30 crc kubenswrapper[5058]: E1014 08:46:30.207353 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4"
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.290967 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m"
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.295895 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4"
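The "Unhandled Error" dump above is the full spec of the container that failed to start. The failure comes from the dns-svc VolumeMount with SubPath: for subPath mounts the kubelet prepares a per-container bind source under /var/lib/kubelet/pods/<uid>/volume-subpaths/..., and the runtime found that source missing at create time. The relevant fragment of the spec, rebuilt with k8s.io/api types (a reconstruction for discussion, copied from the dump, not new configuration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Fragment of the dnsmasq-dns container from the "Unhandled Error" dump.
func dnsmasqContainer() corev1.Container {
	return corev1.Container{
		Name:  "dnsmasq-dns",
		Image: "quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:0468cb21803d466b2abfe00835cf1d2d",
		VolumeMounts: []corev1.VolumeMount{
			{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
			// This SubPath is the one whose prepared bind source,
			// .../volume-subpaths/dns-svc/dnsmasq-dns/1, was missing at create time.
			{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
		},
		LivenessProbe: &corev1.Probe{
			ProbeHandler:        corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(5353)}},
			InitialDelaySeconds: 3, TimeoutSeconds: 5, PeriodSeconds: 3, SuccessThreshold: 1, FailureThreshold: 3,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler:        corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(5353)}},
			InitialDelaySeconds: 5, TimeoutSeconds: 5, PeriodSeconds: 5, SuccessThreshold: 1, FailureThreshold: 3,
		},
	}
}

func main() {
	c := dnsmasqContainer()
	fmt.Println(c.VolumeMounts[1].MountPath) // /etc/dnsmasq.d/hosts/dns-svc
}
```

The retry succeeds shortly after: the same pod's dnsmasq-dns container starts at 08:46:31.919297 below, once the subPath source has been re-prepared.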
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355301 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") pod \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355378 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config\") pod \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355402 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config\") pod \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355480 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q27w4\" (UniqueName: \"kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4\") pod \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355573 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc\") pod \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\" (UID: \"6a9d6e51-442b-4fa5-aef9-dd93d13829b2\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.355612 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v2kk\" (UniqueName: \"kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk\") pod \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.360014 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4" (OuterVolumeSpecName: "kube-api-access-q27w4") pod "6a9d6e51-442b-4fa5-aef9-dd93d13829b2" (UID: "6a9d6e51-442b-4fa5-aef9-dd93d13829b2"). InnerVolumeSpecName "kube-api-access-q27w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.360061 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk" (OuterVolumeSpecName: "kube-api-access-4v2kk") pod "4270a2b5-97e2-456a-80ce-8aacdac8abc6" (UID: "4270a2b5-97e2-456a-80ce-8aacdac8abc6"). InnerVolumeSpecName "kube-api-access-4v2kk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.374198 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config" (OuterVolumeSpecName: "config") pod "6a9d6e51-442b-4fa5-aef9-dd93d13829b2" (UID: "6a9d6e51-442b-4fa5-aef9-dd93d13829b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.378328 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a9d6e51-442b-4fa5-aef9-dd93d13829b2" (UID: "6a9d6e51-442b-4fa5-aef9-dd93d13829b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:30 crc kubenswrapper[5058]: E1014 08:46:30.378767 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc podName:4270a2b5-97e2-456a-80ce-8aacdac8abc6 nodeName:}" failed. No retries permitted until 2025-10-14 08:46:30.878735227 +0000 UTC m=+7138.789819043 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc") pod "4270a2b5-97e2-456a-80ce-8aacdac8abc6" (UID: "4270a2b5-97e2-456a-80ce-8aacdac8abc6") : error deleting /var/lib/kubelet/pods/4270a2b5-97e2-456a-80ce-8aacdac8abc6/volume-subpaths: remove /var/lib/kubelet/pods/4270a2b5-97e2-456a-80ce-8aacdac8abc6/volume-subpaths: no such file or directory
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.379055 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config" (OuterVolumeSpecName: "config") pod "4270a2b5-97e2-456a-80ce-8aacdac8abc6" (UID: "4270a2b5-97e2-456a-80ce-8aacdac8abc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.457452 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-config\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.457768 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-config\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.457948 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q27w4\" (UniqueName: \"kubernetes.io/projected/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-kube-api-access-q27w4\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.458115 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a9d6e51-442b-4fa5-aef9-dd93d13829b2-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.458246 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v2kk\" (UniqueName: \"kubernetes.io/projected/4270a2b5-97e2-456a-80ce-8aacdac8abc6-kube-api-access-4v2kk\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.883052 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4" event={"ID":"6a9d6e51-442b-4fa5-aef9-dd93d13829b2","Type":"ContainerDied","Data":"69ebeaa7acc7dd64e71ac601232d1ca33f6043835e693a5e10720fcc1f6661c1"}
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.883083 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547c9989d7-9jvn4"
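The nestedpendingoperations entry above shows how failed volume operations are retried: the operation is locked out until a deadline ("No retries permitted until ... (durationBeforeRetry 500ms)") and the lock-out grows exponentially on consecutive failures. A pacing sketch, assuming the 500ms initial delay from the log, a doubling factor, and an assumed cap of 2m2s (the cap is not visible in this log):

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the lock-out on each consecutive failure, starting at the
// 500ms seen in "durationBeforeRetry 500ms". The 2m2s ceiling is an assumption
// about the kubelet's backoff, not something shown in these entries.
func nextRetry(last time.Duration) time.Duration {
	const (
		initial    = 500 * time.Millisecond
		assumedMax = 2*time.Minute + 2*time.Second
	)
	if last == 0 {
		return initial
	}
	if next := last * 2; next < assumedMax {
		return next
	}
	return assumedMax
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextRetry(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s ...
	}
}
```

Here the backoff never gets past the first step: the same dns-svc volume tears down cleanly at 08:46:30.970124 below.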
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.883479 5058 scope.go:117] "RemoveContainer" containerID="b1c99b7e3ca68efbe5707735f7da26f4e96bd70587e389757e8acf319015bdf0"
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.886108 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell3-server-0" event={"ID":"3001759e-a841-4846-8879-5e8fc14f1f5e","Type":"ContainerStarted","Data":"50d42e27e8a0e06f472eecb1716793ffe602404c9d121855c8890b5e021665b2"}
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.887937 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerStarted","Data":"4f4253ec9724b81aa68d6805539e6565dd94f0b8a24efa7fedf7e102f3031c07"}
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.895845 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m"
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.895926 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd8799f99-ncn2m" event={"ID":"4270a2b5-97e2-456a-80ce-8aacdac8abc6","Type":"ContainerDied","Data":"bcf1a8c7f0dd2cffcd4b9057d0859752a1f1892f00ced1f012d32fe015fb658b"}
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.905626 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" event={"ID":"91309438-7cfa-47a2-ad04-5d9a326bd940","Type":"ContainerStarted","Data":"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789"}
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.938412 5058 scope.go:117] "RemoveContainer" containerID="938fbf65011736868a200f6d16e034872eb8dfd0a434f0370259bb4c219d7054"
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.965685 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") pod \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\" (UID: \"4270a2b5-97e2-456a-80ce-8aacdac8abc6\") "
Oct 14 08:46:30 crc kubenswrapper[5058]: I1014 08:46:30.970124 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4270a2b5-97e2-456a-80ce-8aacdac8abc6" (UID: "4270a2b5-97e2-456a-80ce-8aacdac8abc6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.004707 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" podStartSLOduration=4.427207797 podStartE2EDuration="24.00461468s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:08.539972347 +0000 UTC m=+7116.451056153" lastFinishedPulling="2025-10-14 08:46:28.11737922 +0000 UTC m=+7136.028463036" observedRunningTime="2025-10-14 08:46:30.998601074 +0000 UTC m=+7138.909684890" watchObservedRunningTime="2025-10-14 08:46:31.00461468 +0000 UTC m=+7138.915698496"
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.053213 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"]
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.058051 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-547c9989d7-9jvn4"]
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.073716 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4270a2b5-97e2-456a-80ce-8aacdac8abc6-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.243030 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"]
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.248915 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd8799f99-ncn2m"]
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.919297 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" event={"ID":"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4","Type":"ContainerStarted","Data":"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"}
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.919890 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v"
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.921288 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5"
Oct 14 08:46:31 crc kubenswrapper[5058]: I1014 08:46:31.943954 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" podStartSLOduration=5.03123582 podStartE2EDuration="24.943938884s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:08.069763854 +0000 UTC m=+7115.980847660" lastFinishedPulling="2025-10-14 08:46:27.982466918 +0000 UTC m=+7135.893550724" observedRunningTime="2025-10-14 08:46:31.939666889 +0000 UTC m=+7139.850750785" watchObservedRunningTime="2025-10-14 08:46:31.943938884 +0000 UTC m=+7139.855022690"
Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.810320 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4270a2b5-97e2-456a-80ce-8aacdac8abc6" path="/var/lib/kubelet/pods/4270a2b5-97e2-456a-80ce-8aacdac8abc6/volumes"
Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.811413 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9d6e51-442b-4fa5-aef9-dd93d13829b2" path="/var/lib/kubelet/pods/6a9d6e51-442b-4fa5-aef9-dd93d13829b2/volumes"
Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.926652 5058 generic.go:334] "Generic (PLEG): container finished" podID="4947a1e1-3c94-4e63-ae34-1a1540c3f121" containerID="1a2a4f3ca349624b9c2a9487bcc5e4dd6f3446de0ba1ebd74815ae036a68f73d" exitCode=0
containerID="1a2a4f3ca349624b9c2a9487bcc5e4dd6f3446de0ba1ebd74815ae036a68f73d" exitCode=0 Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.926716 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell2-galera-0" event={"ID":"4947a1e1-3c94-4e63-ae34-1a1540c3f121","Type":"ContainerDied","Data":"1a2a4f3ca349624b9c2a9487bcc5e4dd6f3446de0ba1ebd74815ae036a68f73d"} Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.929033 5058 generic.go:334] "Generic (PLEG): container finished" podID="aca65ca2-d9c7-4553-9ebe-5a4a9f22a040" containerID="fbdb8ec4a1610afe1759b2cb79d27f4f862e79326f13279ad1dd3430033be54e" exitCode=0 Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.929114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell3-galera-0" event={"ID":"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040","Type":"ContainerDied","Data":"fbdb8ec4a1610afe1759b2cb79d27f4f862e79326f13279ad1dd3430033be54e"} Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.933295 5058 generic.go:334] "Generic (PLEG): container finished" podID="654ae580-516a-4b85-a9e8-39ebff319e12" containerID="49c6d2d290f17c0a84901451434b0ff710db04ccc664f867ca347fd7e91f739d" exitCode=0 Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.933369 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"654ae580-516a-4b85-a9e8-39ebff319e12","Type":"ContainerDied","Data":"49c6d2d290f17c0a84901451434b0ff710db04ccc664f867ca347fd7e91f739d"} Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.935517 5058 generic.go:334] "Generic (PLEG): container finished" podID="284bfce7-71a1-418e-a39d-9636fa58aec6" containerID="14d27d2fc40ab5b5b6a1c10ed4501a3636560e38150ce357b5196a37ba82720a" exitCode=0 Oct 14 08:46:32 crc kubenswrapper[5058]: I1014 08:46:32.935565 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"284bfce7-71a1-418e-a39d-9636fa58aec6","Type":"ContainerDied","Data":"14d27d2fc40ab5b5b6a1c10ed4501a3636560e38150ce357b5196a37ba82720a"} Oct 14 08:46:33 crc kubenswrapper[5058]: I1014 08:46:33.949882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"284bfce7-71a1-418e-a39d-9636fa58aec6","Type":"ContainerStarted","Data":"7582bef5ff2949b526a60f97b441c08022009d44f6136612afce200324da2f4a"} Oct 14 08:46:33 crc kubenswrapper[5058]: I1014 08:46:33.952567 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell2-galera-0" event={"ID":"4947a1e1-3c94-4e63-ae34-1a1540c3f121","Type":"ContainerStarted","Data":"edab8ba2ea1f9443339a57c319051210015854f2205c0171b8f4bef0bc6c5630"} Oct 14 08:46:33 crc kubenswrapper[5058]: I1014 08:46:33.955393 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell3-galera-0" event={"ID":"aca65ca2-d9c7-4553-9ebe-5a4a9f22a040","Type":"ContainerStarted","Data":"384b8cb77ac2586586d55a6c7f992a4d86f396677c5e9d75a81d39d2d77df2c6"} Oct 14 08:46:33 crc kubenswrapper[5058]: I1014 08:46:33.958425 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"654ae580-516a-4b85-a9e8-39ebff319e12","Type":"ContainerStarted","Data":"5f4c0b5b78f6dca61222fb8e28fbf068551932cfb3d3b03297fe0a34a61a11a4"} Oct 14 08:46:33 crc kubenswrapper[5058]: I1014 08:46:33.993978 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.976431 
Oct 14 08:46:34 crc kubenswrapper[5058]: I1014 08:46:34.023373 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.187733699 podStartE2EDuration="27.023351082s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:10.081084709 +0000 UTC m=+7117.992168515" lastFinishedPulling="2025-10-14 08:46:27.916702082 +0000 UTC m=+7135.827785898" observedRunningTime="2025-10-14 08:46:34.018395997 +0000 UTC m=+7141.929479833" watchObservedRunningTime="2025-10-14 08:46:34.023351082 +0000 UTC m=+7141.934434928"
Oct 14 08:46:34 crc kubenswrapper[5058]: I1014 08:46:34.059663 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell3-galera-0" podStartSLOduration=10.144752616 podStartE2EDuration="22.059638715s" podCreationTimestamp="2025-10-14 08:46:12 +0000 UTC" firstStartedPulling="2025-10-14 08:46:16.087477672 +0000 UTC m=+7123.998561478" lastFinishedPulling="2025-10-14 08:46:28.002363771 +0000 UTC m=+7135.913447577" observedRunningTime="2025-10-14 08:46:34.053380001 +0000 UTC m=+7141.964463857" watchObservedRunningTime="2025-10-14 08:46:34.059638715 +0000 UTC m=+7141.970722561"
Oct 14 08:46:34 crc kubenswrapper[5058]: I1014 08:46:34.089905 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell2-galera-0" podStartSLOduration=8.995900232 podStartE2EDuration="24.08987962s" podCreationTimestamp="2025-10-14 08:46:10 +0000 UTC" firstStartedPulling="2025-10-14 08:46:12.86560407 +0000 UTC m=+7120.776687876" lastFinishedPulling="2025-10-14 08:46:27.959583448 +0000 UTC m=+7135.870667264" observedRunningTime="2025-10-14 08:46:34.089005445 +0000 UTC m=+7142.000089271" watchObservedRunningTime="2025-10-14 08:46:34.08987962 +0000 UTC m=+7142.000963466"
Oct 14 08:46:37 crc kubenswrapper[5058]: I1014 08:46:37.278842 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Oct 14 08:46:37 crc kubenswrapper[5058]: I1014 08:46:37.709969 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v"
Oct 14 08:46:37 crc kubenswrapper[5058]: I1014 08:46:37.789765 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"
Oct 14 08:46:37 crc kubenswrapper[5058]: E1014 08:46:37.790085 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.041171 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5"
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.085540 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"]
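machine-config-daemon-q5fhs stays in CrashLoopBackOff throughout this window; every sync attempt is rejected with the same "back-off 5m0s" message because the restart backoff has reached its ceiling. The wait doubles after each failed restart up to the 5m0s cap visible in these entries; the 10s base in the sketch below is kubelet's conventional default and an assumption here.

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelays lists the waits before successive restarts of a crashing
// container: doubling from an assumed 10s base, capped at the logged 5m0s.
func crashLoopDelays(restarts int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(crashLoopDelays(8)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
}
```

The backoff is reset only after the container runs without crashing for long enough, which never happens here, so the "RemoveContainer" / "Error syncing pod" pair repeats at each retry.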
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.085723 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="dnsmasq-dns" containerID="cri-o://acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd" gracePeriod=10
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.548731 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v"
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.624290 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config\") pod \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") "
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.624360 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc\") pod \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") "
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.624434 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6826\" (UniqueName: \"kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826\") pod \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\" (UID: \"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4\") "
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.630765 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826" (OuterVolumeSpecName: "kube-api-access-t6826") pod "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" (UID: "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4"). InnerVolumeSpecName "kube-api-access-t6826". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.663414 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config" (OuterVolumeSpecName: "config") pod "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" (UID: "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.665167 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" (UID: "50c614fd-dae6-4cbc-bc5b-362c55fcf2b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.726301 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-config\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.726335 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:38 crc kubenswrapper[5058]: I1014 08:46:38.726347 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6826\" (UniqueName: \"kubernetes.io/projected/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4-kube-api-access-t6826\") on node \"crc\" DevicePath \"\""
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.009773 5058 generic.go:334] "Generic (PLEG): container finished" podID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerID="acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd" exitCode=0
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.009870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" event={"ID":"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4","Type":"ContainerDied","Data":"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"}
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.009960 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v" event={"ID":"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4","Type":"ContainerDied","Data":"f90d1426629cee73225467bd436b2b1ad9da6dc657e54abc465f5e3c464c3097"}
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.009998 5058 scope.go:117] "RemoveContainer" containerID="acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.010097 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6687b8c-vmz7v"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.040341 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"]
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.048114 5058 scope.go:117] "RemoveContainer" containerID="26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.049281 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6687b8c-vmz7v"]
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.067982 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.068027 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.072153 5058 scope.go:117] "RemoveContainer" containerID="acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"
Oct 14 08:46:39 crc kubenswrapper[5058]: E1014 08:46:39.072568 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd\": container with ID starting with acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd not found: ID does not exist" containerID="acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.072642 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd"} err="failed to get container status \"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd\": rpc error: code = NotFound desc = could not find container \"acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd\": container with ID starting with acadf5fa93a3f17a2b9e4143fa6cf1dcf3a5c4c5f1b554a58343180f64d81fdd not found: ID does not exist"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.072678 5058 scope.go:117] "RemoveContainer" containerID="26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798"
Oct 14 08:46:39 crc kubenswrapper[5058]: E1014 08:46:39.073127 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798\": container with ID starting with 26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798 not found: ID does not exist" containerID="26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798"
Oct 14 08:46:39 crc kubenswrapper[5058]: I1014 08:46:39.073173 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798"} err="failed to get container status \"26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798\": rpc error: code = NotFound desc = could not find container \"26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798\": container with ID starting with 26fdd3eea5d68184d5522d9b771c803a4576365ae760dc4193b89b6b07f8e798 not found: ID does not exist"
Oct 14 08:46:40 crc kubenswrapper[5058]: I1014 08:46:40.471818 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
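The RemoveContainer / "ContainerStatus from runtime service failed" exchange above is a benign race: by the time the kubelet retries the deletion, CRI-O has already forgotten the container, so the runtime answers NotFound and the deletor just logs it and moves on. A sketch of that idempotent-removal pattern, with removeFromRuntime standing in (hypothetically) for the real CRI RemoveContainer call:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound answer from the runtime as success:
// the container is already gone, so there is nothing left to clean up.
func removeContainer(id string, removeFromRuntime func(string) error) error {
	if err := removeFromRuntime(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("acadf5fa", alreadyGone)) // <nil>
}
```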
pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:40 crc kubenswrapper[5058]: I1014 08:46:40.472226 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:40 crc kubenswrapper[5058]: I1014 08:46:40.804179 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" path="/var/lib/kubelet/pods/50c614fd-dae6-4cbc-bc5b-362c55fcf2b4/volumes" Oct 14 08:46:41 crc kubenswrapper[5058]: I1014 08:46:41.147529 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 14 08:46:41 crc kubenswrapper[5058]: I1014 08:46:41.221867 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 14 08:46:42 crc kubenswrapper[5058]: I1014 08:46:42.278390 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:42 crc kubenswrapper[5058]: I1014 08:46:42.278707 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:42 crc kubenswrapper[5058]: I1014 08:46:42.354676 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:42 crc kubenswrapper[5058]: I1014 08:46:42.533642 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:42 crc kubenswrapper[5058]: I1014 08:46:42.593735 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 14 08:46:43 crc kubenswrapper[5058]: I1014 08:46:43.123924 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell2-galera-0" Oct 14 08:46:43 crc kubenswrapper[5058]: I1014 08:46:43.450235 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:43 crc kubenswrapper[5058]: I1014 08:46:43.450282 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:43 crc kubenswrapper[5058]: I1014 08:46:43.501451 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:44 crc kubenswrapper[5058]: I1014 08:46:44.130401 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell3-galera-0" Oct 14 08:46:52 crc kubenswrapper[5058]: I1014 08:46:52.798875 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:46:52 crc kubenswrapper[5058]: E1014 08:46:52.799918 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.222161 5058 generic.go:334] "Generic (PLEG): container finished" podID="3001759e-a841-4846-8879-5e8fc14f1f5e" containerID="50d42e27e8a0e06f472eecb1716793ffe602404c9d121855c8890b5e021665b2" exitCode=0 Oct 14 08:47:02 crc 
kubenswrapper[5058]: I1014 08:47:02.222293 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell3-server-0" event={"ID":"3001759e-a841-4846-8879-5e8fc14f1f5e","Type":"ContainerDied","Data":"50d42e27e8a0e06f472eecb1716793ffe602404c9d121855c8890b5e021665b2"} Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.229832 5058 generic.go:334] "Generic (PLEG): container finished" podID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerID="4f4253ec9724b81aa68d6805539e6565dd94f0b8a24efa7fedf7e102f3031c07" exitCode=0 Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.229926 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerDied","Data":"4f4253ec9724b81aa68d6805539e6565dd94f0b8a24efa7fedf7e102f3031c07"} Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.233946 5058 generic.go:334] "Generic (PLEG): container finished" podID="552c2732-565b-4a0b-bbc2-64ca8e832310" containerID="57fdc1e4eeb6bff8666046f1a11a05905ea23a5d74ee01638a0e27dc374c34d7" exitCode=0 Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.234050 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell2-server-0" event={"ID":"552c2732-565b-4a0b-bbc2-64ca8e832310","Type":"ContainerDied","Data":"57fdc1e4eeb6bff8666046f1a11a05905ea23a5d74ee01638a0e27dc374c34d7"} Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.243854 5058 generic.go:334] "Generic (PLEG): container finished" podID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerID="3c352f089d49affe1535a0d8522d7a95c738710c8fe5244d10425d96a1310663" exitCode=0 Oct 14 08:47:02 crc kubenswrapper[5058]: I1014 08:47:02.243934 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerDied","Data":"3c352f089d49affe1535a0d8522d7a95c738710c8fe5244d10425d96a1310663"} Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.252454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell3-server-0" event={"ID":"3001759e-a841-4846-8879-5e8fc14f1f5e","Type":"ContainerStarted","Data":"78608f2542bc31b2fb006f2e8f94d8589c6da0de88502a59761a1715aa2b8eac"} Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.253901 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell3-server-0" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.255870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerStarted","Data":"712343329c0c947466ba1abca909118c28943afe3ec45e5e02a1fca515e38be0"} Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.256391 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.257669 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell2-server-0" event={"ID":"552c2732-565b-4a0b-bbc2-64ca8e832310","Type":"ContainerStarted","Data":"e83f930fc881e14850f53780f0f6544e69b9fa2c430b876f6f4aeb5a7cf27bbe"} Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.258614 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell2-server-0" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.261693 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerStarted","Data":"05479e65bbcecab3919e8b80a551f2c97c57057fd3ffb3cc9e40622d4f4cab6b"} Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.262682 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.279385 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell3-server-0" podStartSLOduration=39.0259159 podStartE2EDuration="56.279370792s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:10.686379158 +0000 UTC m=+7118.597462964" lastFinishedPulling="2025-10-14 08:46:27.93983404 +0000 UTC m=+7135.850917856" observedRunningTime="2025-10-14 08:47:03.2779311 +0000 UTC m=+7171.189014916" watchObservedRunningTime="2025-10-14 08:47:03.279370792 +0000 UTC m=+7171.190454598" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.301737 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell2-server-0" podStartSLOduration=38.537960968 podStartE2EDuration="56.301718097s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:10.081004466 +0000 UTC m=+7117.992088272" lastFinishedPulling="2025-10-14 08:46:27.844761585 +0000 UTC m=+7135.755845401" observedRunningTime="2025-10-14 08:47:03.299029428 +0000 UTC m=+7171.210113254" watchObservedRunningTime="2025-10-14 08:47:03.301718097 +0000 UTC m=+7171.212801903" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.322387 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.085769261 podStartE2EDuration="57.322367361s" podCreationTimestamp="2025-10-14 08:46:06 +0000 UTC" firstStartedPulling="2025-10-14 08:46:08.703514278 +0000 UTC m=+7116.614598084" lastFinishedPulling="2025-10-14 08:46:27.940112378 +0000 UTC m=+7135.851196184" observedRunningTime="2025-10-14 08:47:03.319855248 +0000 UTC m=+7171.230939064" watchObservedRunningTime="2025-10-14 08:47:03.322367361 +0000 UTC m=+7171.233451167" Oct 14 08:47:03 crc kubenswrapper[5058]: I1014 08:47:03.348612 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.43974235 podStartE2EDuration="56.34859852s" podCreationTimestamp="2025-10-14 08:46:07 +0000 UTC" firstStartedPulling="2025-10-14 08:46:09.012332063 +0000 UTC m=+7116.923415869" lastFinishedPulling="2025-10-14 08:46:27.921188243 +0000 UTC m=+7135.832272039" observedRunningTime="2025-10-14 08:47:03.346677234 +0000 UTC m=+7171.257761040" watchObservedRunningTime="2025-10-14 08:47:03.34859852 +0000 UTC m=+7171.259682326" Oct 14 08:47:05 crc kubenswrapper[5058]: I1014 08:47:05.790493 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:47:05 crc kubenswrapper[5058]: E1014 08:47:05.791207 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:47:17 crc kubenswrapper[5058]: 
Oct 14 08:47:17 crc kubenswrapper[5058]: E1014 08:47:17.791588 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:47:18 crc kubenswrapper[5058]: I1014 08:47:18.232768 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Oct 14 08:47:18 crc kubenswrapper[5058]: I1014 08:47:18.484892 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:18 crc kubenswrapper[5058]: I1014 08:47:18.846966 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell2-server-0"
Oct 14 08:47:19 crc kubenswrapper[5058]: I1014 08:47:19.503996 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell3-server-0"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.413135 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"]
Oct 14 08:47:26 crc kubenswrapper[5058]: E1014 08:47:26.415036 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9d6e51-442b-4fa5-aef9-dd93d13829b2" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.415165 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9d6e51-442b-4fa5-aef9-dd93d13829b2" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: E1014 08:47:26.415289 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.416309 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: E1014 08:47:26.416453 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="dnsmasq-dns"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.416560 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="dnsmasq-dns"
Oct 14 08:47:26 crc kubenswrapper[5058]: E1014 08:47:26.416667 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4270a2b5-97e2-456a-80ce-8aacdac8abc6" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.416787 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4270a2b5-97e2-456a-80ce-8aacdac8abc6" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.417207 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9d6e51-442b-4fa5-aef9-dd93d13829b2" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.417336 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4270a2b5-97e2-456a-80ce-8aacdac8abc6" containerName="init"
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.417459 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="dnsmasq-dns"
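Admitting dnsmasq-dns-6f6867678c-cgbxh triggers RemoveStaleState in cpu_manager and memory_manager: assignments still recorded for the deleted dnsmasq pods (4270a2b5..., 50c614fd..., 6a9d6e51...) are purged before the new pod is accounted. A sketch of that pruning, with an illustrative map layout rather than the kubelet's actual state types:

```go
package main

import "fmt"

// assignments maps podUID -> set of container names with recorded state.
type assignments map[string]map[string]struct{}

// removeStaleState drops entries for pods that are no longer active,
// mirroring the "RemoveStaleState: removing container" entries above.
func removeStaleState(a assignments, active map[string]bool) {
	for podUID, containers := range a {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(a, podUID) // deleting during range is safe in Go
	}
}

func main() {
	a := assignments{
		"50c614fd-dae6-4cbc-bc5b-362c55fcf2b4": {"init": {}, "dnsmasq-dns": {}},
		"c4bf9877-0fae-4b0c-b651-8570384d186c": {"init": {}},
	}
	removeStaleState(a, map[string]bool{"c4bf9877-0fae-4b0c-b651-8570384d186c": true})
}
```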
podUID="50c614fd-dae6-4cbc-bc5b-362c55fcf2b4" containerName="dnsmasq-dns" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.419172 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.439257 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"] Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.546961 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.547083 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.547193 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gh7r\" (UniqueName: \"kubernetes.io/projected/c4bf9877-0fae-4b0c-b651-8570384d186c-kube-api-access-2gh7r\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.648544 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gh7r\" (UniqueName: \"kubernetes.io/projected/c4bf9877-0fae-4b0c-b651-8570384d186c-kube-api-access-2gh7r\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.649052 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.649143 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.650289 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.650301 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config\") pod \"dnsmasq-dns-6f6867678c-cgbxh\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:26 crc kubenswrapper[5058]: 
Oct 14 08:47:26 crc kubenswrapper[5058]: I1014 08:47:26.758466 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh"
Oct 14 08:47:27 crc kubenswrapper[5058]: I1014 08:47:27.123705 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 14 08:47:27 crc kubenswrapper[5058]: I1014 08:47:27.248022 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"]
Oct 14 08:47:27 crc kubenswrapper[5058]: I1014 08:47:27.504061 5058 generic.go:334] "Generic (PLEG): container finished" podID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerID="03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce" exitCode=0
Oct 14 08:47:27 crc kubenswrapper[5058]: I1014 08:47:27.504147 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" event={"ID":"c4bf9877-0fae-4b0c-b651-8570384d186c","Type":"ContainerDied","Data":"03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce"}
Oct 14 08:47:27 crc kubenswrapper[5058]: I1014 08:47:27.504356 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" event={"ID":"c4bf9877-0fae-4b0c-b651-8570384d186c","Type":"ContainerStarted","Data":"e4b08f809a9a13f2ab5a5d4a7d030ed29c4b7d37ba3b66f58bafda5114abc35e"}
Oct 14 08:47:28 crc kubenswrapper[5058]: I1014 08:47:28.187531 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 08:47:28 crc kubenswrapper[5058]: I1014 08:47:28.512120 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" event={"ID":"c4bf9877-0fae-4b0c-b651-8570384d186c","Type":"ContainerStarted","Data":"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe"}
Oct 14 08:47:28 crc kubenswrapper[5058]: I1014 08:47:28.512472 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh"
Oct 14 08:47:28 crc kubenswrapper[5058]: I1014 08:47:28.537400 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" podStartSLOduration=2.537385226 podStartE2EDuration="2.537385226s" podCreationTimestamp="2025-10-14 08:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:47:28.531005289 +0000 UTC m=+7196.442089095" watchObservedRunningTime="2025-10-14 08:47:28.537385226 +0000 UTC m=+7196.448469032"
Oct 14 08:47:29 crc kubenswrapper[5058]: I1014 08:47:29.024620 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="rabbitmq" containerID="cri-o://712343329c0c947466ba1abca909118c28943afe3ec45e5e02a1fca515e38be0" gracePeriod=604799
Oct 14 08:47:29 crc kubenswrapper[5058]: I1014 08:47:29.791394 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"
Oct 14 08:47:29 crc kubenswrapper[5058]: E1014 08:47:29.791759 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:47:30 crc kubenswrapper[5058]: I1014 08:47:30.037638 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="rabbitmq" containerID="cri-o://05479e65bbcecab3919e8b80a551f2c97c57057fd3ffb3cc9e40622d4f4cab6b" gracePeriod=604799 Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.587580 5058 generic.go:334] "Generic (PLEG): container finished" podID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerID="712343329c0c947466ba1abca909118c28943afe3ec45e5e02a1fca515e38be0" exitCode=0 Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.587634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerDied","Data":"712343329c0c947466ba1abca909118c28943afe3ec45e5e02a1fca515e38be0"} Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.587884 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"566005b2-aadc-466a-a1b6-ad8614d9998a","Type":"ContainerDied","Data":"a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631"} Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.587901 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31af6cda1c2b694c9151e1bc4f03776799c3e8bfea7e0ea56b5373b961de631" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.615459 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.708673 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.708754 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.708805 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.708890 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.709103 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.709162 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.709197 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.710770 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.710894 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.711193 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.711236 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hxhc\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc\") pod \"566005b2-aadc-466a-a1b6-ad8614d9998a\" (UID: \"566005b2-aadc-466a-a1b6-ad8614d9998a\") " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.711711 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.711728 5058 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.711982 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.715915 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.717124 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info" (OuterVolumeSpecName: "pod-info") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.717469 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc" (OuterVolumeSpecName: "kube-api-access-5hxhc") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "kube-api-access-5hxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.726479 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8" (OuterVolumeSpecName: "persistence") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "pvc-98134597-0fcf-48cd-8c47-7e40359782d8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.739312 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf" (OuterVolumeSpecName: "server-conf") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.811106 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "566005b2-aadc-466a-a1b6-ad8614d9998a" (UID: "566005b2-aadc-466a-a1b6-ad8614d9998a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813235 5058 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/566005b2-aadc-466a-a1b6-ad8614d9998a-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813278 5058 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/566005b2-aadc-466a-a1b6-ad8614d9998a-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813334 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") on node \"crc\" " Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813359 5058 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/566005b2-aadc-466a-a1b6-ad8614d9998a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813376 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813393 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/566005b2-aadc-466a-a1b6-ad8614d9998a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.813409 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hxhc\" (UniqueName: \"kubernetes.io/projected/566005b2-aadc-466a-a1b6-ad8614d9998a-kube-api-access-5hxhc\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.843119 5058 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.843366 5058 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-98134597-0fcf-48cd-8c47-7e40359782d8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8") on node "crc" Oct 14 08:47:35 crc kubenswrapper[5058]: I1014 08:47:35.914432 5058 reconciler_common.go:293] "Volume detached for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.600604 5058 generic.go:334] "Generic (PLEG): container finished" podID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerID="05479e65bbcecab3919e8b80a551f2c97c57057fd3ffb3cc9e40622d4f4cab6b" exitCode=0 Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.600667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerDied","Data":"05479e65bbcecab3919e8b80a551f2c97c57057fd3ffb3cc9e40622d4f4cab6b"} Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.601684 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.661587 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.672259 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.695606 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.707091 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:47:36 crc kubenswrapper[5058]: E1014 08:47:36.707812 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.708108 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: E1014 08:47:36.708690 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.708872 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: E1014 08:47:36.708991 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="setup-container" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.709106 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="setup-container" Oct 14 08:47:36 crc kubenswrapper[5058]: E1014 08:47:36.709239 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="setup-container" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.709337 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="setup-container" Oct 
14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.709718 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.709867 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" containerName="rabbitmq" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.711269 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.717600 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.718183 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.718566 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.718716 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hts89" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.718909 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.723938 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726486 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq7tb\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726587 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726663 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726720 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726859 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726892 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.726935 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.727185 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.727527 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.728122 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.731852 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.732287 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb" (OuterVolumeSpecName: "kube-api-access-cq7tb") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "kube-api-access-cq7tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.732658 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd\") pod \"2981ba1e-b068-49f1-9d95-77ff66de181b\" (UID: \"2981ba1e-b068-49f1-9d95-77ff66de181b\") " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.732924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.733376 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq7tb\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-kube-api-access-cq7tb\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.733398 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.733408 5058 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.733419 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.733428 5058 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2981ba1e-b068-49f1-9d95-77ff66de181b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.741058 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info" (OuterVolumeSpecName: "pod-info") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.756959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384" (OuterVolumeSpecName: "persistence") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.761011 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.764627 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf" (OuterVolumeSpecName: "server-conf") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.812264 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="566005b2-aadc-466a-a1b6-ad8614d9998a" path="/var/lib/kubelet/pods/566005b2-aadc-466a-a1b6-ad8614d9998a/volumes" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.822643 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.822948 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerName="dnsmasq-dns" containerID="cri-o://4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789" gracePeriod=10 Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835016 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835102 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835126 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpq5\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-kube-api-access-wbpq5\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835150 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf35a4e3-bdb0-4a3e-802c-46f70875a887-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835192 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835241 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835263 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835304 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835345 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf35a4e3-bdb0-4a3e-802c-46f70875a887-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835392 5058 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2981ba1e-b068-49f1-9d95-77ff66de181b-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835406 5058 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2981ba1e-b068-49f1-9d95-77ff66de181b-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.835425 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") on node \"crc\" " Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.856598 5058 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.856765 5058 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384") on node "crc" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.868363 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2981ba1e-b068-49f1-9d95-77ff66de181b" (UID: "2981ba1e-b068-49f1-9d95-77ff66de181b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937040 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937086 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbpq5\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-kube-api-access-wbpq5\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937108 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf35a4e3-bdb0-4a3e-802c-46f70875a887-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937138 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937187 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937210 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937238 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937277 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf35a4e3-bdb0-4a3e-802c-46f70875a887-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937314 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937367 5058 reconciler_common.go:293] "Volume detached for volume 
\"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.937380 5058 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2981ba1e-b068-49f1-9d95-77ff66de181b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939064 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939158 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939189 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf35a4e3-bdb0-4a3e-802c-46f70875a887-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939195 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b04d1845d37e5dca694210cfb352c5b6bd7172e5b051bdad229e398ff013847a/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939316 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.939531 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.941563 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.941953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf35a4e3-bdb0-4a3e-802c-46f70875a887-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.944911 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/cf35a4e3-bdb0-4a3e-802c-46f70875a887-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.959056 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbpq5\" (UniqueName: \"kubernetes.io/projected/cf35a4e3-bdb0-4a3e-802c-46f70875a887-kube-api-access-wbpq5\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:36 crc kubenswrapper[5058]: I1014 08:47:36.965731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-98134597-0fcf-48cd-8c47-7e40359782d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-98134597-0fcf-48cd-8c47-7e40359782d8\") pod \"rabbitmq-server-0\" (UID: \"cf35a4e3-bdb0-4a3e-802c-46f70875a887\") " pod="openstack/rabbitmq-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.035811 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.388113 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.442639 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvt5z\" (UniqueName: \"kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z\") pod \"91309438-7cfa-47a2-ad04-5d9a326bd940\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.442851 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config\") pod \"91309438-7cfa-47a2-ad04-5d9a326bd940\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.442891 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc\") pod \"91309438-7cfa-47a2-ad04-5d9a326bd940\" (UID: \"91309438-7cfa-47a2-ad04-5d9a326bd940\") " Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.448183 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z" (OuterVolumeSpecName: "kube-api-access-pvt5z") pod "91309438-7cfa-47a2-ad04-5d9a326bd940" (UID: "91309438-7cfa-47a2-ad04-5d9a326bd940"). InnerVolumeSpecName "kube-api-access-pvt5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.481282 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config" (OuterVolumeSpecName: "config") pod "91309438-7cfa-47a2-ad04-5d9a326bd940" (UID: "91309438-7cfa-47a2-ad04-5d9a326bd940"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.488707 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91309438-7cfa-47a2-ad04-5d9a326bd940" (UID: "91309438-7cfa-47a2-ad04-5d9a326bd940"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.544814 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-config\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.544852 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91309438-7cfa-47a2-ad04-5d9a326bd940-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.544865 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvt5z\" (UniqueName: \"kubernetes.io/projected/91309438-7cfa-47a2-ad04-5d9a326bd940-kube-api-access-pvt5z\") on node \"crc\" DevicePath \"\"" Oct 14 08:47:37 crc kubenswrapper[5058]: W1014 08:47:37.570623 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf35a4e3_bdb0_4a3e_802c_46f70875a887.slice/crio-a9f41b4beea46563fd520580435b0b2b088306a1b446aebdfec36ef8d28f4c27 WatchSource:0}: Error finding container a9f41b4beea46563fd520580435b0b2b088306a1b446aebdfec36ef8d28f4c27: Status 404 returned error can't find the container with id a9f41b4beea46563fd520580435b0b2b088306a1b446aebdfec36ef8d28f4c27 Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.572886 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.613568 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2981ba1e-b068-49f1-9d95-77ff66de181b","Type":"ContainerDied","Data":"7a5a23f9e84997a0acbedf281deba5e44e61d1266b637bb8a4611da512a9e3d3"} Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.613634 5058 scope.go:117] "RemoveContainer" containerID="05479e65bbcecab3919e8b80a551f2c97c57057fd3ffb3cc9e40622d4f4cab6b" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.613670 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.617769 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf35a4e3-bdb0-4a3e-802c-46f70875a887","Type":"ContainerStarted","Data":"a9f41b4beea46563fd520580435b0b2b088306a1b446aebdfec36ef8d28f4c27"} Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.620527 5058 generic.go:334] "Generic (PLEG): container finished" podID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerID="4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789" exitCode=0 Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.620570 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" event={"ID":"91309438-7cfa-47a2-ad04-5d9a326bd940","Type":"ContainerDied","Data":"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789"} Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.620597 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" event={"ID":"91309438-7cfa-47a2-ad04-5d9a326bd940","Type":"ContainerDied","Data":"9334eba3f4398e673b336bc920bcad74ae1dbcf2adb0530b45a15391537e5feb"} Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.620660 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd4cb54d5-s7fg5" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.642155 5058 scope.go:117] "RemoveContainer" containerID="3c352f089d49affe1535a0d8522d7a95c738710c8fe5244d10425d96a1310663" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.658634 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.675196 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dd4cb54d5-s7fg5"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.686712 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.687360 5058 scope.go:117] "RemoveContainer" containerID="4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.694946 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.711386 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:47:37 crc kubenswrapper[5058]: E1014 08:47:37.711660 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerName="dnsmasq-dns" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.711676 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerName="dnsmasq-dns" Oct 14 08:47:37 crc kubenswrapper[5058]: E1014 08:47:37.711692 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerName="init" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.711698 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" containerName="init" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.711870 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" 
containerName="dnsmasq-dns" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.713698 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.715644 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.715917 5058 scope.go:117] "RemoveContainer" containerID="44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.716054 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.716190 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6qlnn" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.716209 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.728038 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.729318 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.744349 5058 scope.go:117] "RemoveContainer" containerID="4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789" Oct 14 08:47:37 crc kubenswrapper[5058]: E1014 08:47:37.747278 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789\": container with ID starting with 4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789 not found: ID does not exist" containerID="4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.747326 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789"} err="failed to get container status \"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789\": rpc error: code = NotFound desc = could not find container \"4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789\": container with ID starting with 4c4f07caec9a3570ee7b9b65c32e51776b87e838a1e00d67b4bfc6a8c395b789 not found: ID does not exist" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.747353 5058 scope.go:117] "RemoveContainer" containerID="44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3" Oct 14 08:47:37 crc kubenswrapper[5058]: E1014 08:47:37.748993 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3\": container with ID starting with 44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3 not found: ID does not exist" containerID="44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.749038 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3"} err="failed to get 
container status \"44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3\": rpc error: code = NotFound desc = could not find container \"44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3\": container with ID starting with 44493096265f7e5b33daed5150b666ab716a29bc983c77f8f1072b97473396e3 not found: ID does not exist" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.854060 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.854396 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4dn\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-kube-api-access-5s4dn\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.854571 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.854723 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.854965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.855154 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.855344 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.855469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.855516 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.957518 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.957879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.958061 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.958156 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s4dn\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-kube-api-access-5s4dn\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.958285 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.958925 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.958765 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.959147 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:47:37 crc 
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.959330 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.960319 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.961216 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.961249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.961955 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.961970 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/929b0ca848490a0e7ae29651f7670beeca843a85a67f644a00cd5fbdad3ded20/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.962566 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.962575 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.964323 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:37 crc kubenswrapper[5058]: I1014 08:47:37.976511 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s4dn\" (UniqueName: \"kubernetes.io/projected/c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2-kube-api-access-5s4dn\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.004643 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f083f6ad-2cb5-4a37-a6e9-0a730b69c384\") pod \"rabbitmq-cell1-server-0\" (UID: \"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.047987 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.402219 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.638278 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2","Type":"ContainerStarted","Data":"bd10339771c621b75f256fae64c6912db9e6f4ec7d5b66c675c4fde75b40b0a9"}
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.808541 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2981ba1e-b068-49f1-9d95-77ff66de181b" path="/var/lib/kubelet/pods/2981ba1e-b068-49f1-9d95-77ff66de181b/volumes"
Oct 14 08:47:38 crc kubenswrapper[5058]: I1014 08:47:38.810396 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91309438-7cfa-47a2-ad04-5d9a326bd940" path="/var/lib/kubelet/pods/91309438-7cfa-47a2-ad04-5d9a326bd940/volumes"
Oct 14 08:47:39 crc kubenswrapper[5058]: I1014 08:47:39.658206 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf35a4e3-bdb0-4a3e-802c-46f70875a887","Type":"ContainerStarted","Data":"eede800cebe3c810ed1d1b9dc4cceeb2022d97948c7517794844ebc2a6c10042"}
Oct 14 08:47:40 crc kubenswrapper[5058]: I1014 08:47:40.674611 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2","Type":"ContainerStarted","Data":"5f0d82a47d94fc0e93fa84aac78d3a4f5e3b10343c33d40cacb898df4ab7c204"}
Oct 14 08:47:44 crc kubenswrapper[5058]: I1014 08:47:44.790421 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"
Oct 14 08:47:44 crc kubenswrapper[5058]: E1014 08:47:44.791891 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:47:59 crc kubenswrapper[5058]: I1014 08:47:59.790955 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e"
containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:47:59 crc kubenswrapper[5058]: E1014 08:47:59.792324 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:48:11 crc kubenswrapper[5058]: I1014 08:48:11.789620 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:48:11 crc kubenswrapper[5058]: E1014 08:48:11.790459 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:48:13 crc kubenswrapper[5058]: I1014 08:48:13.001788 5058 generic.go:334] "Generic (PLEG): container finished" podID="cf35a4e3-bdb0-4a3e-802c-46f70875a887" containerID="eede800cebe3c810ed1d1b9dc4cceeb2022d97948c7517794844ebc2a6c10042" exitCode=0 Oct 14 08:48:13 crc kubenswrapper[5058]: I1014 08:48:13.001857 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf35a4e3-bdb0-4a3e-802c-46f70875a887","Type":"ContainerDied","Data":"eede800cebe3c810ed1d1b9dc4cceeb2022d97948c7517794844ebc2a6c10042"} Oct 14 08:48:14 crc kubenswrapper[5058]: I1014 08:48:14.013955 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf35a4e3-bdb0-4a3e-802c-46f70875a887","Type":"ContainerStarted","Data":"4f576d8df79b68011ec58df403d1dc8ea3c28a11cde5b0cc3658f6d74543b8e2"} Oct 14 08:48:14 crc kubenswrapper[5058]: I1014 08:48:14.014432 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 08:48:14 crc kubenswrapper[5058]: I1014 08:48:14.016461 5058 generic.go:334] "Generic (PLEG): container finished" podID="c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2" containerID="5f0d82a47d94fc0e93fa84aac78d3a4f5e3b10343c33d40cacb898df4ab7c204" exitCode=0 Oct 14 08:48:14 crc kubenswrapper[5058]: I1014 08:48:14.016506 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2","Type":"ContainerDied","Data":"5f0d82a47d94fc0e93fa84aac78d3a4f5e3b10343c33d40cacb898df4ab7c204"} Oct 14 08:48:14 crc kubenswrapper[5058]: I1014 08:48:14.059651 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.059631125 podStartE2EDuration="38.059631125s" podCreationTimestamp="2025-10-14 08:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:48:14.052394093 +0000 UTC m=+7241.963477899" watchObservedRunningTime="2025-10-14 08:48:14.059631125 +0000 UTC m=+7241.970714931" Oct 14 08:48:15 crc kubenswrapper[5058]: I1014 08:48:15.026012 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2","Type":"ContainerStarted","Data":"b640723ac85630987126f49f57b6c7d4f1b87e96c17fdde2f32423c6ef3c7a6e"} Oct 14 08:48:15 crc kubenswrapper[5058]: I1014 08:48:15.026791 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:48:15 crc kubenswrapper[5058]: I1014 08:48:15.052255 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.052237299 podStartE2EDuration="38.052237299s" podCreationTimestamp="2025-10-14 08:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:48:15.048222652 +0000 UTC m=+7242.959306488" watchObservedRunningTime="2025-10-14 08:48:15.052237299 +0000 UTC m=+7242.963321105" Oct 14 08:48:25 crc kubenswrapper[5058]: I1014 08:48:25.790083 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:48:25 crc kubenswrapper[5058]: E1014 08:48:25.791294 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:48:27 crc kubenswrapper[5058]: I1014 08:48:27.038927 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 08:48:28 crc kubenswrapper[5058]: I1014 08:48:28.051214 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 08:48:29 crc kubenswrapper[5058]: I1014 08:48:29.863138 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Oct 14 08:48:29 crc kubenswrapper[5058]: I1014 08:48:29.864984 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 14 08:48:29 crc kubenswrapper[5058]: I1014 08:48:29.868645 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:48:29 crc kubenswrapper[5058]: I1014 08:48:29.884174 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 14 08:48:30 crc kubenswrapper[5058]: I1014 08:48:30.014082 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjc2l\" (UniqueName: \"kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l\") pod \"mariadb-client-1-default\" (UID: \"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa\") " pod="openstack/mariadb-client-1-default" Oct 14 08:48:30 crc kubenswrapper[5058]: I1014 08:48:30.116636 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjc2l\" (UniqueName: \"kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l\") pod \"mariadb-client-1-default\" (UID: \"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa\") " pod="openstack/mariadb-client-1-default" Oct 14 08:48:30 crc kubenswrapper[5058]: I1014 08:48:30.152314 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjc2l\" (UniqueName: \"kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l\") pod \"mariadb-client-1-default\" (UID: \"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa\") " pod="openstack/mariadb-client-1-default" Oct 14 08:48:30 crc kubenswrapper[5058]: I1014 08:48:30.194358 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 14 08:48:30 crc kubenswrapper[5058]: I1014 08:48:30.873626 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 14 08:48:30 crc kubenswrapper[5058]: W1014 08:48:30.884831 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6654cf93_a7c0_43ea_8b45_eea3cfd15dfa.slice/crio-4a6e22995060d883bb9f17626b34b349e015b08557d459ddb127f8ee193b330c WatchSource:0}: Error finding container 4a6e22995060d883bb9f17626b34b349e015b08557d459ddb127f8ee193b330c: Status 404 returned error can't find the container with id 4a6e22995060d883bb9f17626b34b349e015b08557d459ddb127f8ee193b330c Oct 14 08:48:31 crc kubenswrapper[5058]: I1014 08:48:31.193622 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa","Type":"ContainerStarted","Data":"4a6e22995060d883bb9f17626b34b349e015b08557d459ddb127f8ee193b330c"} Oct 14 08:48:32 crc kubenswrapper[5058]: I1014 08:48:32.206663 5058 generic.go:334] "Generic (PLEG): container finished" podID="6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" containerID="d9cdac61f8e80785ea81abdb07828d13d775dc514928241bd1f5e76b0c7ee159" exitCode=0 Oct 14 08:48:32 crc kubenswrapper[5058]: I1014 08:48:32.207079 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa","Type":"ContainerDied","Data":"d9cdac61f8e80785ea81abdb07828d13d775dc514928241bd1f5e76b0c7ee159"} Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.670703 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.699625 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_6654cf93-a7c0-43ea-8b45-eea3cfd15dfa/mariadb-client-1-default/0.log" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.731098 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.738140 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.782516 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjc2l\" (UniqueName: \"kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l\") pod \"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa\" (UID: \"6654cf93-a7c0-43ea-8b45-eea3cfd15dfa\") " Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.790558 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l" (OuterVolumeSpecName: "kube-api-access-hjc2l") pod "6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" (UID: "6654cf93-a7c0-43ea-8b45-eea3cfd15dfa"). InnerVolumeSpecName "kube-api-access-hjc2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.878920 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-cell1"] Oct 14 08:48:33 crc kubenswrapper[5058]: E1014 08:48:33.880137 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" containerName="mariadb-client-1-default" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.880320 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" containerName="mariadb-client-1-default" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.880860 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" containerName="mariadb-client-1-default" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.881632 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.884916 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjc2l\" (UniqueName: \"kubernetes.io/projected/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa-kube-api-access-hjc2l\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.897632 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-cell1"] Oct 14 08:48:33 crc kubenswrapper[5058]: I1014 08:48:33.986657 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsv5\" (UniqueName: \"kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5\") pod \"mariadb-client-1-cell1\" (UID: \"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a\") " pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.088970 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsv5\" (UniqueName: \"kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5\") pod \"mariadb-client-1-cell1\" (UID: \"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a\") " pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.117963 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsv5\" (UniqueName: \"kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5\") pod \"mariadb-client-1-cell1\" (UID: \"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a\") " pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.209427 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.230497 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6e22995060d883bb9f17626b34b349e015b08557d459ddb127f8ee193b330c" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.230595 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.624724 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-cell1"] Oct 14 08:48:34 crc kubenswrapper[5058]: W1014 08:48:34.624970 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad4c859_6ffd_4607_9bb1_6835d5b0ef8a.slice/crio-f48636fa386a07c8057c0a13a2f2848d2f080a502481f98c22220bfe46db3ddc WatchSource:0}: Error finding container f48636fa386a07c8057c0a13a2f2848d2f080a502481f98c22220bfe46db3ddc: Status 404 returned error can't find the container with id f48636fa386a07c8057c0a13a2f2848d2f080a502481f98c22220bfe46db3ddc Oct 14 08:48:34 crc kubenswrapper[5058]: I1014 08:48:34.801919 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6654cf93-a7c0-43ea-8b45-eea3cfd15dfa" path="/var/lib/kubelet/pods/6654cf93-a7c0-43ea-8b45-eea3cfd15dfa/volumes" Oct 14 08:48:35 crc kubenswrapper[5058]: I1014 08:48:35.243387 5058 generic.go:334] "Generic (PLEG): container finished" podID="aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" containerID="27d9fdf1a5f38ac95dc58783021917355575ea2bec4028fb9ca176133f47dc36" exitCode=0 Oct 14 08:48:35 crc kubenswrapper[5058]: I1014 08:48:35.243441 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-cell1" event={"ID":"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a","Type":"ContainerDied","Data":"27d9fdf1a5f38ac95dc58783021917355575ea2bec4028fb9ca176133f47dc36"} Oct 14 08:48:35 crc kubenswrapper[5058]: I1014 08:48:35.243472 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-cell1" event={"ID":"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a","Type":"ContainerStarted","Data":"f48636fa386a07c8057c0a13a2f2848d2f080a502481f98c22220bfe46db3ddc"} Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.733896 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.758418 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-cell1_aad4c859-6ffd-4607-9bb1-6835d5b0ef8a/mariadb-client-1-cell1/0.log" Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.825203 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-cell1"] Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.834937 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsv5\" (UniqueName: \"kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5\") pod \"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a\" (UID: \"aad4c859-6ffd-4607-9bb1-6835d5b0ef8a\") " Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.834942 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-cell1"] Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.841861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5" (OuterVolumeSpecName: "kube-api-access-rcsv5") pod "aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" (UID: "aad4c859-6ffd-4607-9bb1-6835d5b0ef8a"). InnerVolumeSpecName "kube-api-access-rcsv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:36 crc kubenswrapper[5058]: I1014 08:48:36.936690 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsv5\" (UniqueName: \"kubernetes.io/projected/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a-kube-api-access-rcsv5\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.014817 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-cell2"] Oct 14 08:48:37 crc kubenswrapper[5058]: E1014 08:48:37.016510 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" containerName="mariadb-client-1-cell1" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.016550 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" containerName="mariadb-client-1-cell1" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.018084 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" containerName="mariadb-client-1-cell1" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.018766 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.028098 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-cell2"] Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.140991 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x985t\" (UniqueName: \"kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t\") pod \"mariadb-client-1-cell2\" (UID: \"77045173-7140-420b-8cdf-ee7c884406f7\") " pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.242884 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x985t\" (UniqueName: \"kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t\") pod \"mariadb-client-1-cell2\" (UID: \"77045173-7140-420b-8cdf-ee7c884406f7\") " pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.265922 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48636fa386a07c8057c0a13a2f2848d2f080a502481f98c22220bfe46db3ddc" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.265996 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell1" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.273222 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x985t\" (UniqueName: \"kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t\") pod \"mariadb-client-1-cell2\" (UID: \"77045173-7140-420b-8cdf-ee7c884406f7\") " pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.344324 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.790943 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:48:37 crc kubenswrapper[5058]: E1014 08:48:37.791699 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:48:37 crc kubenswrapper[5058]: I1014 08:48:37.981714 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-cell2"] Oct 14 08:48:38 crc kubenswrapper[5058]: I1014 08:48:38.287238 5058 generic.go:334] "Generic (PLEG): container finished" podID="77045173-7140-420b-8cdf-ee7c884406f7" containerID="726ccc7e9e96be4b2eb7d995d10ba7143bac3d060f76940d7abea176638fdb1b" exitCode=0 Oct 14 08:48:38 crc kubenswrapper[5058]: I1014 08:48:38.287547 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-cell2" event={"ID":"77045173-7140-420b-8cdf-ee7c884406f7","Type":"ContainerDied","Data":"726ccc7e9e96be4b2eb7d995d10ba7143bac3d060f76940d7abea176638fdb1b"} Oct 14 08:48:38 crc kubenswrapper[5058]: I1014 08:48:38.287681 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-cell2" event={"ID":"77045173-7140-420b-8cdf-ee7c884406f7","Type":"ContainerStarted","Data":"ed568d4d27437b2a0c9ab7dddb1152920799541c6d37e8076c951544903df002"} Oct 14 08:48:38 crc kubenswrapper[5058]: I1014 08:48:38.812014 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad4c859-6ffd-4607-9bb1-6835d5b0ef8a" path="/var/lib/kubelet/pods/aad4c859-6ffd-4607-9bb1-6835d5b0ef8a/volumes" Oct 14 08:48:39 crc kubenswrapper[5058]: I1014 08:48:39.882076 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:39 crc kubenswrapper[5058]: I1014 08:48:39.899168 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-cell2_77045173-7140-420b-8cdf-ee7c884406f7/mariadb-client-1-cell2/0.log" Oct 14 08:48:39 crc kubenswrapper[5058]: I1014 08:48:39.925234 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-cell2"] Oct 14 08:48:39 crc kubenswrapper[5058]: I1014 08:48:39.930219 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-cell2"] Oct 14 08:48:39 crc kubenswrapper[5058]: I1014 08:48:39.998582 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x985t\" (UniqueName: \"kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t\") pod \"77045173-7140-420b-8cdf-ee7c884406f7\" (UID: \"77045173-7140-420b-8cdf-ee7c884406f7\") " Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.007366 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t" (OuterVolumeSpecName: "kube-api-access-x985t") pod "77045173-7140-420b-8cdf-ee7c884406f7" (UID: "77045173-7140-420b-8cdf-ee7c884406f7"). InnerVolumeSpecName "kube-api-access-x985t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.101652 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x985t\" (UniqueName: \"kubernetes.io/projected/77045173-7140-420b-8cdf-ee7c884406f7-kube-api-access-x985t\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.310206 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed568d4d27437b2a0c9ab7dddb1152920799541c6d37e8076c951544903df002" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.310291 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-cell2" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.425759 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Oct 14 08:48:40 crc kubenswrapper[5058]: E1014 08:48:40.426398 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77045173-7140-420b-8cdf-ee7c884406f7" containerName="mariadb-client-1-cell2" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.426433 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="77045173-7140-420b-8cdf-ee7c884406f7" containerName="mariadb-client-1-cell2" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.426772 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="77045173-7140-420b-8cdf-ee7c884406f7" containerName="mariadb-client-1-cell2" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.427731 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.430501 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.439676 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.508428 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2krm\" (UniqueName: \"kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm\") pod \"mariadb-client-2-default\" (UID: \"844b598f-a734-4e40-a8e5-db8495db377f\") " pod="openstack/mariadb-client-2-default" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.610497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2krm\" (UniqueName: \"kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm\") pod \"mariadb-client-2-default\" (UID: \"844b598f-a734-4e40-a8e5-db8495db377f\") " pod="openstack/mariadb-client-2-default" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.648396 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2krm\" (UniqueName: \"kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm\") pod \"mariadb-client-2-default\" (UID: \"844b598f-a734-4e40-a8e5-db8495db377f\") " pod="openstack/mariadb-client-2-default" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.766317 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 14 08:48:40 crc kubenswrapper[5058]: I1014 08:48:40.802589 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77045173-7140-420b-8cdf-ee7c884406f7" path="/var/lib/kubelet/pods/77045173-7140-420b-8cdf-ee7c884406f7/volumes" Oct 14 08:48:41 crc kubenswrapper[5058]: I1014 08:48:41.348102 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 14 08:48:41 crc kubenswrapper[5058]: W1014 08:48:41.361961 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod844b598f_a734_4e40_a8e5_db8495db377f.slice/crio-d9631e20120f708ecd7c4e7a4fd6f71c1fde85d4179ef176e77f204f399af017 WatchSource:0}: Error finding container d9631e20120f708ecd7c4e7a4fd6f71c1fde85d4179ef176e77f204f399af017: Status 404 returned error can't find the container with id d9631e20120f708ecd7c4e7a4fd6f71c1fde85d4179ef176e77f204f399af017 Oct 14 08:48:42 crc kubenswrapper[5058]: I1014 08:48:42.335703 5058 generic.go:334] "Generic (PLEG): container finished" podID="844b598f-a734-4e40-a8e5-db8495db377f" containerID="e807dfafdf16164ef3d846156e6aff041a40d3f897e29a194fda2b9d5609be48" exitCode=0 Oct 14 08:48:42 crc kubenswrapper[5058]: I1014 08:48:42.335784 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"844b598f-a734-4e40-a8e5-db8495db377f","Type":"ContainerDied","Data":"e807dfafdf16164ef3d846156e6aff041a40d3f897e29a194fda2b9d5609be48"} Oct 14 08:48:42 crc kubenswrapper[5058]: I1014 08:48:42.336151 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"844b598f-a734-4e40-a8e5-db8495db377f","Type":"ContainerStarted","Data":"d9631e20120f708ecd7c4e7a4fd6f71c1fde85d4179ef176e77f204f399af017"} Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.850639 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.934833 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_844b598f-a734-4e40-a8e5-db8495db377f/mariadb-client-2-default/0.log" Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.960923 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.965406 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.978271 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2krm\" (UniqueName: \"kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm\") pod \"844b598f-a734-4e40-a8e5-db8495db377f\" (UID: \"844b598f-a734-4e40-a8e5-db8495db377f\") " Oct 14 08:48:43 crc kubenswrapper[5058]: I1014 08:48:43.984956 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm" (OuterVolumeSpecName: "kube-api-access-s2krm") pod "844b598f-a734-4e40-a8e5-db8495db377f" (UID: "844b598f-a734-4e40-a8e5-db8495db377f"). InnerVolumeSpecName "kube-api-access-s2krm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.081329 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2krm\" (UniqueName: \"kubernetes.io/projected/844b598f-a734-4e40-a8e5-db8495db377f-kube-api-access-s2krm\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.379880 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9631e20120f708ecd7c4e7a4fd6f71c1fde85d4179ef176e77f204f399af017" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.379974 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.447668 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Oct 14 08:48:44 crc kubenswrapper[5058]: E1014 08:48:44.448981 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844b598f-a734-4e40-a8e5-db8495db377f" containerName="mariadb-client-2-default" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.449031 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="844b598f-a734-4e40-a8e5-db8495db377f" containerName="mariadb-client-2-default" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.449528 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="844b598f-a734-4e40-a8e5-db8495db377f" containerName="mariadb-client-2-default" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.450743 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.454268 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.467020 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.488409 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6cs\" (UniqueName: \"kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs\") pod \"mariadb-client-1\" (UID: \"08c55b8a-c198-453c-93a7-43025d5be720\") " pod="openstack/mariadb-client-1" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.590503 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6cs\" (UniqueName: \"kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs\") pod \"mariadb-client-1\" (UID: \"08c55b8a-c198-453c-93a7-43025d5be720\") " pod="openstack/mariadb-client-1" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.620220 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6cs\" (UniqueName: \"kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs\") pod \"mariadb-client-1\" (UID: \"08c55b8a-c198-453c-93a7-43025d5be720\") " pod="openstack/mariadb-client-1" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.783336 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 14 08:48:44 crc kubenswrapper[5058]: I1014 08:48:44.800530 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844b598f-a734-4e40-a8e5-db8495db377f" path="/var/lib/kubelet/pods/844b598f-a734-4e40-a8e5-db8495db377f/volumes" Oct 14 08:48:45 crc kubenswrapper[5058]: I1014 08:48:45.328862 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 14 08:48:45 crc kubenswrapper[5058]: I1014 08:48:45.388587 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"08c55b8a-c198-453c-93a7-43025d5be720","Type":"ContainerStarted","Data":"a0b06727f414eb7cafe9297310a145d6776a2dff6368cd8650e38f3bc8e62e06"} Oct 14 08:48:46 crc kubenswrapper[5058]: I1014 08:48:46.405681 5058 generic.go:334] "Generic (PLEG): container finished" podID="08c55b8a-c198-453c-93a7-43025d5be720" containerID="6e91de6cfed16e865826e1134abe3c449220735678e0a673b6509fcf57c2fbbb" exitCode=0 Oct 14 08:48:46 crc kubenswrapper[5058]: I1014 08:48:46.405787 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"08c55b8a-c198-453c-93a7-43025d5be720","Type":"ContainerDied","Data":"6e91de6cfed16e865826e1134abe3c449220735678e0a673b6509fcf57c2fbbb"} Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.886935 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.910624 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_08c55b8a-c198-453c-93a7-43025d5be720/mariadb-client-1/0.log" Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.939979 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.946036 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.952881 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k6cs\" (UniqueName: \"kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs\") pod \"08c55b8a-c198-453c-93a7-43025d5be720\" (UID: \"08c55b8a-c198-453c-93a7-43025d5be720\") " Oct 14 08:48:47 crc kubenswrapper[5058]: I1014 08:48:47.962595 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs" (OuterVolumeSpecName: "kube-api-access-9k6cs") pod "08c55b8a-c198-453c-93a7-43025d5be720" (UID: "08c55b8a-c198-453c-93a7-43025d5be720"). InnerVolumeSpecName "kube-api-access-9k6cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.054731 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k6cs\" (UniqueName: \"kubernetes.io/projected/08c55b8a-c198-453c-93a7-43025d5be720-kube-api-access-9k6cs\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.423492 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Oct 14 08:48:48 crc kubenswrapper[5058]: E1014 08:48:48.424190 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c55b8a-c198-453c-93a7-43025d5be720" containerName="mariadb-client-1" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.424227 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c55b8a-c198-453c-93a7-43025d5be720" containerName="mariadb-client-1" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.425019 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c55b8a-c198-453c-93a7-43025d5be720" containerName="mariadb-client-1" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.426657 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.435641 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.448145 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b06727f414eb7cafe9297310a145d6776a2dff6368cd8650e38f3bc8e62e06" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.448331 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.462625 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwgf\" (UniqueName: \"kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf\") pod \"mariadb-client-4-default\" (UID: \"0eb96cc8-ddef-40cc-988c-63137a1cad21\") " pod="openstack/mariadb-client-4-default" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.564233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwgf\" (UniqueName: \"kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf\") pod \"mariadb-client-4-default\" (UID: \"0eb96cc8-ddef-40cc-988c-63137a1cad21\") " pod="openstack/mariadb-client-4-default" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.587818 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwgf\" (UniqueName: \"kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf\") pod \"mariadb-client-4-default\" (UID: \"0eb96cc8-ddef-40cc-988c-63137a1cad21\") " pod="openstack/mariadb-client-4-default" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.751155 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.790770 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:48:48 crc kubenswrapper[5058]: E1014 08:48:48.791384 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:48:48 crc kubenswrapper[5058]: I1014 08:48:48.814122 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c55b8a-c198-453c-93a7-43025d5be720" path="/var/lib/kubelet/pods/08c55b8a-c198-453c-93a7-43025d5be720/volumes" Oct 14 08:48:49 crc kubenswrapper[5058]: I1014 08:48:49.092639 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 14 08:48:49 crc kubenswrapper[5058]: W1014 08:48:49.098962 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb96cc8_ddef_40cc_988c_63137a1cad21.slice/crio-e99694e37f0db2a371e7af3efc04c5c700ca80e179eae692976b5a86c752b210 WatchSource:0}: Error finding container e99694e37f0db2a371e7af3efc04c5c700ca80e179eae692976b5a86c752b210: Status 404 returned error can't find the container with id e99694e37f0db2a371e7af3efc04c5c700ca80e179eae692976b5a86c752b210 Oct 14 08:48:49 crc kubenswrapper[5058]: I1014 08:48:49.458770 5058 generic.go:334] "Generic (PLEG): container finished" podID="0eb96cc8-ddef-40cc-988c-63137a1cad21" containerID="fc3be3385e20fd10f7db93a81c7cfd6981c94ac1e472e9fe5e123c77cb59aa3d" exitCode=0 Oct 14 08:48:49 crc kubenswrapper[5058]: I1014 08:48:49.458917 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"0eb96cc8-ddef-40cc-988c-63137a1cad21","Type":"ContainerDied","Data":"fc3be3385e20fd10f7db93a81c7cfd6981c94ac1e472e9fe5e123c77cb59aa3d"} Oct 14 08:48:49 crc kubenswrapper[5058]: I1014 08:48:49.459160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"0eb96cc8-ddef-40cc-988c-63137a1cad21","Type":"ContainerStarted","Data":"e99694e37f0db2a371e7af3efc04c5c700ca80e179eae692976b5a86c752b210"} Oct 14 08:48:50 crc kubenswrapper[5058]: I1014 08:48:50.972177 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 14 08:48:50 crc kubenswrapper[5058]: I1014 08:48:50.995632 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_0eb96cc8-ddef-40cc-988c-63137a1cad21/mariadb-client-4-default/0.log" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.025257 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.030024 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.035578 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wwgf\" (UniqueName: \"kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf\") pod \"0eb96cc8-ddef-40cc-988c-63137a1cad21\" (UID: \"0eb96cc8-ddef-40cc-988c-63137a1cad21\") " Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.046138 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf" (OuterVolumeSpecName: "kube-api-access-9wwgf") pod "0eb96cc8-ddef-40cc-988c-63137a1cad21" (UID: "0eb96cc8-ddef-40cc-988c-63137a1cad21"). InnerVolumeSpecName "kube-api-access-9wwgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.137761 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wwgf\" (UniqueName: \"kubernetes.io/projected/0eb96cc8-ddef-40cc-988c-63137a1cad21-kube-api-access-9wwgf\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.251057 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-cell1"] Oct 14 08:48:51 crc kubenswrapper[5058]: E1014 08:48:51.251667 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb96cc8-ddef-40cc-988c-63137a1cad21" containerName="mariadb-client-4-default" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.251703 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb96cc8-ddef-40cc-988c-63137a1cad21" containerName="mariadb-client-4-default" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.252125 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb96cc8-ddef-40cc-988c-63137a1cad21" containerName="mariadb-client-4-default" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.253257 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.262776 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-cell1"] Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.341686 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4626\" (UniqueName: \"kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626\") pod \"mariadb-client-4-cell1\" (UID: \"ee438548-586f-45df-9b19-39399e74ed07\") " pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.443575 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4626\" (UniqueName: \"kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626\") pod \"mariadb-client-4-cell1\" (UID: \"ee438548-586f-45df-9b19-39399e74ed07\") " pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.482007 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4626\" (UniqueName: \"kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626\") pod \"mariadb-client-4-cell1\" (UID: \"ee438548-586f-45df-9b19-39399e74ed07\") " pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.485529 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99694e37f0db2a371e7af3efc04c5c700ca80e179eae692976b5a86c752b210" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.485604 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 14 08:48:51 crc kubenswrapper[5058]: I1014 08:48:51.587770 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:52 crc kubenswrapper[5058]: W1014 08:48:52.243997 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee438548_586f_45df_9b19_39399e74ed07.slice/crio-c9d130799d9e6fef9f9c12e1d2f7192ef47e52e564d9f0a17a13baeebafa4182 WatchSource:0}: Error finding container c9d130799d9e6fef9f9c12e1d2f7192ef47e52e564d9f0a17a13baeebafa4182: Status 404 returned error can't find the container with id c9d130799d9e6fef9f9c12e1d2f7192ef47e52e564d9f0a17a13baeebafa4182 Oct 14 08:48:52 crc kubenswrapper[5058]: I1014 08:48:52.245095 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-cell1"] Oct 14 08:48:52 crc kubenswrapper[5058]: I1014 08:48:52.497351 5058 generic.go:334] "Generic (PLEG): container finished" podID="ee438548-586f-45df-9b19-39399e74ed07" containerID="77a78a32632108da5f42d375cb52a450c2f6e1b4d888e60a71698117d290869b" exitCode=0 Oct 14 08:48:52 crc kubenswrapper[5058]: I1014 08:48:52.497435 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-cell1" event={"ID":"ee438548-586f-45df-9b19-39399e74ed07","Type":"ContainerDied","Data":"77a78a32632108da5f42d375cb52a450c2f6e1b4d888e60a71698117d290869b"} Oct 14 08:48:52 crc kubenswrapper[5058]: I1014 08:48:52.497484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-cell1" event={"ID":"ee438548-586f-45df-9b19-39399e74ed07","Type":"ContainerStarted","Data":"c9d130799d9e6fef9f9c12e1d2f7192ef47e52e564d9f0a17a13baeebafa4182"} Oct 14 08:48:52 crc kubenswrapper[5058]: I1014 08:48:52.803130 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb96cc8-ddef-40cc-988c-63137a1cad21" path="/var/lib/kubelet/pods/0eb96cc8-ddef-40cc-988c-63137a1cad21/volumes" Oct 14 08:48:53 crc kubenswrapper[5058]: I1014 08:48:53.966383 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:53 crc kubenswrapper[5058]: I1014 08:48:53.983770 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-cell1_ee438548-586f-45df-9b19-39399e74ed07/mariadb-client-4-cell1/0.log" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.017942 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-cell1"] Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.024289 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-cell1"] Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.090742 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4626\" (UniqueName: \"kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626\") pod \"ee438548-586f-45df-9b19-39399e74ed07\" (UID: \"ee438548-586f-45df-9b19-39399e74ed07\") " Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.098289 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626" (OuterVolumeSpecName: "kube-api-access-v4626") pod "ee438548-586f-45df-9b19-39399e74ed07" (UID: "ee438548-586f-45df-9b19-39399e74ed07"). InnerVolumeSpecName "kube-api-access-v4626". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.194198 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4626\" (UniqueName: \"kubernetes.io/projected/ee438548-586f-45df-9b19-39399e74ed07-kube-api-access-v4626\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.212991 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-cell2"] Oct 14 08:48:54 crc kubenswrapper[5058]: E1014 08:48:54.213532 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee438548-586f-45df-9b19-39399e74ed07" containerName="mariadb-client-4-cell1" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.213554 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee438548-586f-45df-9b19-39399e74ed07" containerName="mariadb-client-4-cell1" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.213843 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee438548-586f-45df-9b19-39399e74ed07" containerName="mariadb-client-4-cell1" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.214674 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.231310 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-cell2"] Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.296469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrs5\" (UniqueName: \"kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5\") pod \"mariadb-client-4-cell2\" (UID: \"9b9a3367-c307-47d0-b835-9a40562fb374\") " pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.398173 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrs5\" (UniqueName: \"kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5\") pod \"mariadb-client-4-cell2\" (UID: \"9b9a3367-c307-47d0-b835-9a40562fb374\") " pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.420282 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrs5\" (UniqueName: \"kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5\") pod \"mariadb-client-4-cell2\" (UID: \"9b9a3367-c307-47d0-b835-9a40562fb374\") " pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.523162 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d130799d9e6fef9f9c12e1d2f7192ef47e52e564d9f0a17a13baeebafa4182" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.523215 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-cell1" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.557906 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.810373 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee438548-586f-45df-9b19-39399e74ed07" path="/var/lib/kubelet/pods/ee438548-586f-45df-9b19-39399e74ed07/volumes" Oct 14 08:48:54 crc kubenswrapper[5058]: I1014 08:48:54.941904 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-cell2"] Oct 14 08:48:54 crc kubenswrapper[5058]: W1014 08:48:54.955339 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9a3367_c307_47d0_b835_9a40562fb374.slice/crio-96d547ff78cb01b5d8629aa6ac261a4750c8ff8df4efdb66fc2f3e76bb210111 WatchSource:0}: Error finding container 96d547ff78cb01b5d8629aa6ac261a4750c8ff8df4efdb66fc2f3e76bb210111: Status 404 returned error can't find the container with id 96d547ff78cb01b5d8629aa6ac261a4750c8ff8df4efdb66fc2f3e76bb210111 Oct 14 08:48:55 crc kubenswrapper[5058]: I1014 08:48:55.536560 5058 generic.go:334] "Generic (PLEG): container finished" podID="9b9a3367-c307-47d0-b835-9a40562fb374" containerID="79988468a053f7615ddbefe7b4fa99c11c776fb27848af1ca8ecb9d0b54b7e90" exitCode=0 Oct 14 08:48:55 crc kubenswrapper[5058]: I1014 08:48:55.536673 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-cell2" event={"ID":"9b9a3367-c307-47d0-b835-9a40562fb374","Type":"ContainerDied","Data":"79988468a053f7615ddbefe7b4fa99c11c776fb27848af1ca8ecb9d0b54b7e90"} Oct 14 08:48:55 crc kubenswrapper[5058]: I1014 08:48:55.536975 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-cell2" event={"ID":"9b9a3367-c307-47d0-b835-9a40562fb374","Type":"ContainerStarted","Data":"96d547ff78cb01b5d8629aa6ac261a4750c8ff8df4efdb66fc2f3e76bb210111"} Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.101628 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.129047 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-cell2_9b9a3367-c307-47d0-b835-9a40562fb374/mariadb-client-4-cell2/0.log" Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.148691 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrs5\" (UniqueName: \"kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5\") pod \"9b9a3367-c307-47d0-b835-9a40562fb374\" (UID: \"9b9a3367-c307-47d0-b835-9a40562fb374\") " Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.159913 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5" (OuterVolumeSpecName: "kube-api-access-9vrs5") pod "9b9a3367-c307-47d0-b835-9a40562fb374" (UID: "9b9a3367-c307-47d0-b835-9a40562fb374"). InnerVolumeSpecName "kube-api-access-9vrs5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.168276 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-cell2"] Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.179286 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-cell2"] Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.259641 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrs5\" (UniqueName: \"kubernetes.io/projected/9b9a3367-c307-47d0-b835-9a40562fb374-kube-api-access-9vrs5\") on node \"crc\" DevicePath \"\"" Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.571310 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d547ff78cb01b5d8629aa6ac261a4750c8ff8df4efdb66fc2f3e76bb210111" Oct 14 08:48:57 crc kubenswrapper[5058]: I1014 08:48:57.571350 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-cell2" Oct 14 08:48:58 crc kubenswrapper[5058]: I1014 08:48:58.813111 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9a3367-c307-47d0-b835-9a40562fb374" path="/var/lib/kubelet/pods/9b9a3367-c307-47d0-b835-9a40562fb374/volumes" Oct 14 08:49:00 crc kubenswrapper[5058]: I1014 08:49:00.790130 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:00 crc kubenswrapper[5058]: E1014 08:49:00.790885 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.033235 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Oct 14 08:49:01 crc kubenswrapper[5058]: E1014 08:49:01.033610 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9a3367-c307-47d0-b835-9a40562fb374" containerName="mariadb-client-4-cell2" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.033630 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9a3367-c307-47d0-b835-9a40562fb374" containerName="mariadb-client-4-cell2" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.033913 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9a3367-c307-47d0-b835-9a40562fb374" containerName="mariadb-client-4-cell2" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.034569 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.042369 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.046963 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.127384 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf48v\" (UniqueName: \"kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v\") pod \"mariadb-client-5-default\" (UID: \"4cef1c91-2aaf-4d56-b5ab-156990143ccd\") " pod="openstack/mariadb-client-5-default" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.230035 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf48v\" (UniqueName: \"kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v\") pod \"mariadb-client-5-default\" (UID: \"4cef1c91-2aaf-4d56-b5ab-156990143ccd\") " pod="openstack/mariadb-client-5-default" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.266722 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf48v\" (UniqueName: \"kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v\") pod \"mariadb-client-5-default\" (UID: \"4cef1c91-2aaf-4d56-b5ab-156990143ccd\") " pod="openstack/mariadb-client-5-default" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.371242 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 14 08:49:01 crc kubenswrapper[5058]: I1014 08:49:01.925540 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 14 08:49:02 crc kubenswrapper[5058]: I1014 08:49:02.632540 5058 generic.go:334] "Generic (PLEG): container finished" podID="4cef1c91-2aaf-4d56-b5ab-156990143ccd" containerID="5be4b49a7de26b11a62dd57bfb17e7b030709cd494a9bff50d8c76a1dc793235" exitCode=0 Oct 14 08:49:02 crc kubenswrapper[5058]: I1014 08:49:02.632657 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"4cef1c91-2aaf-4d56-b5ab-156990143ccd","Type":"ContainerDied","Data":"5be4b49a7de26b11a62dd57bfb17e7b030709cd494a9bff50d8c76a1dc793235"} Oct 14 08:49:02 crc kubenswrapper[5058]: I1014 08:49:02.633091 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"4cef1c91-2aaf-4d56-b5ab-156990143ccd","Type":"ContainerStarted","Data":"e3f65dcb2c2a686e3c3a134675eb0401aa9638515c4b6ddebd975f7cd9095bb6"} Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.124132 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.146479 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_4cef1c91-2aaf-4d56-b5ab-156990143ccd/mariadb-client-5-default/0.log" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.173477 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.178770 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.310319 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf48v\" (UniqueName: \"kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v\") pod \"4cef1c91-2aaf-4d56-b5ab-156990143ccd\" (UID: \"4cef1c91-2aaf-4d56-b5ab-156990143ccd\") " Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.318672 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v" (OuterVolumeSpecName: "kube-api-access-qf48v") pod "4cef1c91-2aaf-4d56-b5ab-156990143ccd" (UID: "4cef1c91-2aaf-4d56-b5ab-156990143ccd"). InnerVolumeSpecName "kube-api-access-qf48v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.336561 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Oct 14 08:49:04 crc kubenswrapper[5058]: E1014 08:49:04.337125 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cef1c91-2aaf-4d56-b5ab-156990143ccd" containerName="mariadb-client-5-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.337158 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cef1c91-2aaf-4d56-b5ab-156990143ccd" containerName="mariadb-client-5-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.337415 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cef1c91-2aaf-4d56-b5ab-156990143ccd" containerName="mariadb-client-5-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.338236 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.347917 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.412979 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4h4g\" (UniqueName: \"kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g\") pod \"mariadb-client-6-default\" (UID: \"d3cac284-46f0-4d7e-8df4-04e1186d5752\") " pod="openstack/mariadb-client-6-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.413273 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf48v\" (UniqueName: \"kubernetes.io/projected/4cef1c91-2aaf-4d56-b5ab-156990143ccd-kube-api-access-qf48v\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.514406 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4h4g\" (UniqueName: \"kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g\") pod \"mariadb-client-6-default\" (UID: \"d3cac284-46f0-4d7e-8df4-04e1186d5752\") " pod="openstack/mariadb-client-6-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.545289 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4h4g\" (UniqueName: \"kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g\") pod \"mariadb-client-6-default\" (UID: \"d3cac284-46f0-4d7e-8df4-04e1186d5752\") " pod="openstack/mariadb-client-6-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.654629 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f65dcb2c2a686e3c3a134675eb0401aa9638515c4b6ddebd975f7cd9095bb6" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.654724 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.713686 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 14 08:49:04 crc kubenswrapper[5058]: I1014 08:49:04.811746 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cef1c91-2aaf-4d56-b5ab-156990143ccd" path="/var/lib/kubelet/pods/4cef1c91-2aaf-4d56-b5ab-156990143ccd/volumes" Oct 14 08:49:05 crc kubenswrapper[5058]: I1014 08:49:05.289276 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 14 08:49:05 crc kubenswrapper[5058]: W1014 08:49:05.293397 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3cac284_46f0_4d7e_8df4_04e1186d5752.slice/crio-18db942b6add0d08feb9a4967d606f69a3cd7bd7a08097fef025296f0696c85d WatchSource:0}: Error finding container 18db942b6add0d08feb9a4967d606f69a3cd7bd7a08097fef025296f0696c85d: Status 404 returned error can't find the container with id 18db942b6add0d08feb9a4967d606f69a3cd7bd7a08097fef025296f0696c85d Oct 14 08:49:05 crc kubenswrapper[5058]: I1014 08:49:05.668680 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d3cac284-46f0-4d7e-8df4-04e1186d5752","Type":"ContainerStarted","Data":"627182e8dd3290dffe4479d2725192a766ba4950578c3060ff7868f526ef61d5"} Oct 14 08:49:05 crc kubenswrapper[5058]: I1014 08:49:05.669199 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d3cac284-46f0-4d7e-8df4-04e1186d5752","Type":"ContainerStarted","Data":"18db942b6add0d08feb9a4967d606f69a3cd7bd7a08097fef025296f0696c85d"} Oct 14 08:49:05 crc kubenswrapper[5058]: I1014 08:49:05.690642 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.69061052 podStartE2EDuration="1.69061052s" podCreationTimestamp="2025-10-14 08:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:49:05.687238701 +0000 UTC m=+7293.598322537" watchObservedRunningTime="2025-10-14 08:49:05.69061052 +0000 UTC m=+7293.601694366" Oct 14 08:49:06 crc kubenswrapper[5058]: I1014 08:49:06.679598 5058 generic.go:334] "Generic (PLEG): container finished" podID="d3cac284-46f0-4d7e-8df4-04e1186d5752" containerID="627182e8dd3290dffe4479d2725192a766ba4950578c3060ff7868f526ef61d5" exitCode=0 Oct 14 08:49:06 crc kubenswrapper[5058]: I1014 08:49:06.679688 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d3cac284-46f0-4d7e-8df4-04e1186d5752","Type":"ContainerDied","Data":"627182e8dd3290dffe4479d2725192a766ba4950578c3060ff7868f526ef61d5"} Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.138320 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.185171 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.191369 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.282012 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4h4g\" (UniqueName: \"kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g\") pod \"d3cac284-46f0-4d7e-8df4-04e1186d5752\" (UID: \"d3cac284-46f0-4d7e-8df4-04e1186d5752\") " Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.287752 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g" (OuterVolumeSpecName: "kube-api-access-b4h4g") pod "d3cac284-46f0-4d7e-8df4-04e1186d5752" (UID: "d3cac284-46f0-4d7e-8df4-04e1186d5752"). InnerVolumeSpecName "kube-api-access-b4h4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.353034 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Oct 14 08:49:08 crc kubenswrapper[5058]: E1014 08:49:08.353457 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cac284-46f0-4d7e-8df4-04e1186d5752" containerName="mariadb-client-6-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.353484 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cac284-46f0-4d7e-8df4-04e1186d5752" containerName="mariadb-client-6-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.353921 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cac284-46f0-4d7e-8df4-04e1186d5752" containerName="mariadb-client-6-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.354595 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.366525 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.384184 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4h4g\" (UniqueName: \"kubernetes.io/projected/d3cac284-46f0-4d7e-8df4-04e1186d5752-kube-api-access-b4h4g\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.486217 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvjr\" (UniqueName: \"kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr\") pod \"mariadb-client-7-default\" (UID: \"633e06f6-56ff-4fc7-8a4f-f33cac590274\") " pod="openstack/mariadb-client-7-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.588544 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbvjr\" (UniqueName: \"kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr\") pod \"mariadb-client-7-default\" (UID: \"633e06f6-56ff-4fc7-8a4f-f33cac590274\") " pod="openstack/mariadb-client-7-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.620018 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbvjr\" (UniqueName: \"kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr\") pod \"mariadb-client-7-default\" (UID: \"633e06f6-56ff-4fc7-8a4f-f33cac590274\") " pod="openstack/mariadb-client-7-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.675576 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.715986 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18db942b6add0d08feb9a4967d606f69a3cd7bd7a08097fef025296f0696c85d" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.716140 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 14 08:49:08 crc kubenswrapper[5058]: I1014 08:49:08.816551 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cac284-46f0-4d7e-8df4-04e1186d5752" path="/var/lib/kubelet/pods/d3cac284-46f0-4d7e-8df4-04e1186d5752/volumes" Oct 14 08:49:09 crc kubenswrapper[5058]: I1014 08:49:09.045075 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 14 08:49:09 crc kubenswrapper[5058]: W1014 08:49:09.053024 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod633e06f6_56ff_4fc7_8a4f_f33cac590274.slice/crio-936a90a2ae9ed2aa56f96da91b915f42d98cca2e3c908ba4164eccc3e8324742 WatchSource:0}: Error finding container 936a90a2ae9ed2aa56f96da91b915f42d98cca2e3c908ba4164eccc3e8324742: Status 404 returned error can't find the container with id 936a90a2ae9ed2aa56f96da91b915f42d98cca2e3c908ba4164eccc3e8324742 Oct 14 08:49:09 crc kubenswrapper[5058]: I1014 08:49:09.727961 5058 generic.go:334] "Generic (PLEG): container finished" podID="633e06f6-56ff-4fc7-8a4f-f33cac590274" containerID="11d3c89b934c491e2ad167f10bda427747a8d08096cdee2f1767a2df1831f862" exitCode=0 Oct 14 08:49:09 crc kubenswrapper[5058]: I1014 08:49:09.728007 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"633e06f6-56ff-4fc7-8a4f-f33cac590274","Type":"ContainerDied","Data":"11d3c89b934c491e2ad167f10bda427747a8d08096cdee2f1767a2df1831f862"} Oct 14 08:49:09 crc kubenswrapper[5058]: I1014 08:49:09.728033 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"633e06f6-56ff-4fc7-8a4f-f33cac590274","Type":"ContainerStarted","Data":"936a90a2ae9ed2aa56f96da91b915f42d98cca2e3c908ba4164eccc3e8324742"} Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.265399 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.288212 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_633e06f6-56ff-4fc7-8a4f-f33cac590274/mariadb-client-7-default/0.log" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.320923 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.326178 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.447449 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbvjr\" (UniqueName: \"kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr\") pod \"633e06f6-56ff-4fc7-8a4f-f33cac590274\" (UID: \"633e06f6-56ff-4fc7-8a4f-f33cac590274\") " Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.459536 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr" (OuterVolumeSpecName: "kube-api-access-cbvjr") pod "633e06f6-56ff-4fc7-8a4f-f33cac590274" (UID: "633e06f6-56ff-4fc7-8a4f-f33cac590274"). InnerVolumeSpecName "kube-api-access-cbvjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.492049 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:11 crc kubenswrapper[5058]: E1014 08:49:11.492775 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633e06f6-56ff-4fc7-8a4f-f33cac590274" containerName="mariadb-client-7-default" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.492823 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="633e06f6-56ff-4fc7-8a4f-f33cac590274" containerName="mariadb-client-7-default" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.493096 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="633e06f6-56ff-4fc7-8a4f-f33cac590274" containerName="mariadb-client-7-default" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.494068 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.502886 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.550302 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8r4\" (UniqueName: \"kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4\") pod \"mariadb-client-2\" (UID: \"761cecc2-6ab8-47a4-8819-f15976436eba\") " pod="openstack/mariadb-client-2" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.550486 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbvjr\" (UniqueName: \"kubernetes.io/projected/633e06f6-56ff-4fc7-8a4f-f33cac590274-kube-api-access-cbvjr\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.652661 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8r4\" (UniqueName: \"kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4\") pod \"mariadb-client-2\" (UID: \"761cecc2-6ab8-47a4-8819-f15976436eba\") " pod="openstack/mariadb-client-2" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.685115 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8r4\" (UniqueName: \"kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4\") pod \"mariadb-client-2\" (UID: \"761cecc2-6ab8-47a4-8819-f15976436eba\") " pod="openstack/mariadb-client-2" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.754712 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936a90a2ae9ed2aa56f96da91b915f42d98cca2e3c908ba4164eccc3e8324742" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.754823 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.789986 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:11 crc kubenswrapper[5058]: E1014 08:49:11.790308 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:49:11 crc kubenswrapper[5058]: I1014 08:49:11.865003 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.535044 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.762959 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"761cecc2-6ab8-47a4-8819-f15976436eba","Type":"ContainerStarted","Data":"235d8ff4a37087823faa80ef886ae9406532282fb8465efa6c327c945632ba5a"} Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.763000 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"761cecc2-6ab8-47a4-8819-f15976436eba","Type":"ContainerStarted","Data":"783376c219f43ff717bbb7e548d7d0c2d0c500cd5f3eaa247d35924a4f75a502"} Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.781771 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2" podStartSLOduration=1.781756547 podStartE2EDuration="1.781756547s" podCreationTimestamp="2025-10-14 08:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:49:12.777842382 +0000 UTC m=+7300.688926198" watchObservedRunningTime="2025-10-14 08:49:12.781756547 +0000 UTC m=+7300.692840353" Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.801540 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633e06f6-56ff-4fc7-8a4f-f33cac590274" path="/var/lib/kubelet/pods/633e06f6-56ff-4fc7-8a4f-f33cac590274/volumes" Oct 14 08:49:12 crc kubenswrapper[5058]: I1014 08:49:12.846942 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_761cecc2-6ab8-47a4-8819-f15976436eba/mariadb-client-2/0.log" Oct 14 08:49:13 crc kubenswrapper[5058]: I1014 08:49:13.776529 5058 generic.go:334] "Generic (PLEG): container finished" podID="761cecc2-6ab8-47a4-8819-f15976436eba" containerID="235d8ff4a37087823faa80ef886ae9406532282fb8465efa6c327c945632ba5a" exitCode=0 Oct 14 08:49:13 crc kubenswrapper[5058]: I1014 08:49:13.776620 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"761cecc2-6ab8-47a4-8819-f15976436eba","Type":"ContainerDied","Data":"235d8ff4a37087823faa80ef886ae9406532282fb8465efa6c327c945632ba5a"} Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.263498 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.303794 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.312005 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.420376 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf8r4\" (UniqueName: \"kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4\") pod \"761cecc2-6ab8-47a4-8819-f15976436eba\" (UID: \"761cecc2-6ab8-47a4-8819-f15976436eba\") " Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.428643 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4" (OuterVolumeSpecName: "kube-api-access-zf8r4") pod "761cecc2-6ab8-47a4-8819-f15976436eba" (UID: "761cecc2-6ab8-47a4-8819-f15976436eba"). InnerVolumeSpecName "kube-api-access-zf8r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.522747 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf8r4\" (UniqueName: \"kubernetes.io/projected/761cecc2-6ab8-47a4-8819-f15976436eba-kube-api-access-zf8r4\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.799910 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783376c219f43ff717bbb7e548d7d0c2d0c500cd5f3eaa247d35924a4f75a502" Oct 14 08:49:15 crc kubenswrapper[5058]: I1014 08:49:15.800019 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:16 crc kubenswrapper[5058]: I1014 08:49:16.805979 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761cecc2-6ab8-47a4-8819-f15976436eba" path="/var/lib/kubelet/pods/761cecc2-6ab8-47a4-8819-f15976436eba/volumes" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.449981 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-cell1"] Oct 14 08:49:18 crc kubenswrapper[5058]: E1014 08:49:18.454431 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761cecc2-6ab8-47a4-8819-f15976436eba" containerName="mariadb-client-2" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.454492 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="761cecc2-6ab8-47a4-8819-f15976436eba" containerName="mariadb-client-2" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.454832 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="761cecc2-6ab8-47a4-8819-f15976436eba" containerName="mariadb-client-2" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.456148 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.460641 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.466425 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-cell1"] Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.490251 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8w2k\" (UniqueName: \"kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k\") pod \"mariadb-client-5-cell1\" (UID: \"beb3a5b8-6f8a-494a-a73f-700780685eaa\") " pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.593636 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8w2k\" (UniqueName: \"kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k\") pod \"mariadb-client-5-cell1\" (UID: \"beb3a5b8-6f8a-494a-a73f-700780685eaa\") " pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.627107 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8w2k\" (UniqueName: \"kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k\") pod \"mariadb-client-5-cell1\" (UID: \"beb3a5b8-6f8a-494a-a73f-700780685eaa\") " pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:18 crc kubenswrapper[5058]: I1014 08:49:18.786264 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:19 crc kubenswrapper[5058]: I1014 08:49:19.430988 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-cell1"] Oct 14 08:49:19 crc kubenswrapper[5058]: I1014 08:49:19.849333 5058 generic.go:334] "Generic (PLEG): container finished" podID="beb3a5b8-6f8a-494a-a73f-700780685eaa" containerID="6209a2805194e7f00eb5e8bd78e9194796982d393de338bdd2a37787303d15da" exitCode=0 Oct 14 08:49:19 crc kubenswrapper[5058]: I1014 08:49:19.849755 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-cell1" event={"ID":"beb3a5b8-6f8a-494a-a73f-700780685eaa","Type":"ContainerDied","Data":"6209a2805194e7f00eb5e8bd78e9194796982d393de338bdd2a37787303d15da"} Oct 14 08:49:19 crc kubenswrapper[5058]: I1014 08:49:19.849931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-cell1" event={"ID":"beb3a5b8-6f8a-494a-a73f-700780685eaa","Type":"ContainerStarted","Data":"9971ec374ccafb919e0b2b5156e18d5dfc0df90464d1a5cbe2d97094f056e355"} Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.317523 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.339936 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-cell1_beb3a5b8-6f8a-494a-a73f-700780685eaa/mariadb-client-5-cell1/0.log" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.377610 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-cell1"] Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.385541 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-cell1"] Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.440629 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8w2k\" (UniqueName: \"kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k\") pod \"beb3a5b8-6f8a-494a-a73f-700780685eaa\" (UID: \"beb3a5b8-6f8a-494a-a73f-700780685eaa\") " Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.449067 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k" (OuterVolumeSpecName: "kube-api-access-x8w2k") pod "beb3a5b8-6f8a-494a-a73f-700780685eaa" (UID: "beb3a5b8-6f8a-494a-a73f-700780685eaa"). InnerVolumeSpecName "kube-api-access-x8w2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.542601 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8w2k\" (UniqueName: \"kubernetes.io/projected/beb3a5b8-6f8a-494a-a73f-700780685eaa-kube-api-access-x8w2k\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.572825 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-cell1"] Oct 14 08:49:21 crc kubenswrapper[5058]: E1014 08:49:21.573376 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb3a5b8-6f8a-494a-a73f-700780685eaa" containerName="mariadb-client-5-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.573413 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb3a5b8-6f8a-494a-a73f-700780685eaa" containerName="mariadb-client-5-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.573859 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb3a5b8-6f8a-494a-a73f-700780685eaa" containerName="mariadb-client-5-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.575144 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.579326 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-cell1"] Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.644644 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckkd\" (UniqueName: \"kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd\") pod \"mariadb-client-6-cell1\" (UID: \"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6\") " pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.745963 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckkd\" (UniqueName: \"kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd\") pod \"mariadb-client-6-cell1\" (UID: \"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6\") " pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.780574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckkd\" (UniqueName: \"kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd\") pod \"mariadb-client-6-cell1\" (UID: \"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6\") " pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.881290 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ec374ccafb919e0b2b5156e18d5dfc0df90464d1a5cbe2d97094f056e355" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.881601 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-cell1" Oct 14 08:49:21 crc kubenswrapper[5058]: I1014 08:49:21.905297 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:22 crc kubenswrapper[5058]: I1014 08:49:22.513744 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-cell1"] Oct 14 08:49:22 crc kubenswrapper[5058]: I1014 08:49:22.803897 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb3a5b8-6f8a-494a-a73f-700780685eaa" path="/var/lib/kubelet/pods/beb3a5b8-6f8a-494a-a73f-700780685eaa/volumes" Oct 14 08:49:22 crc kubenswrapper[5058]: I1014 08:49:22.896610 5058 generic.go:334] "Generic (PLEG): container finished" podID="63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" containerID="6a1f77c3fcf2d388f9eaa845c2c7c8449d92bb47b70c8c68b43edf8e5d985442" exitCode=0 Oct 14 08:49:22 crc kubenswrapper[5058]: I1014 08:49:22.897229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-cell1" event={"ID":"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6","Type":"ContainerDied","Data":"6a1f77c3fcf2d388f9eaa845c2c7c8449d92bb47b70c8c68b43edf8e5d985442"} Oct 14 08:49:22 crc kubenswrapper[5058]: I1014 08:49:22.897443 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-cell1" event={"ID":"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6","Type":"ContainerStarted","Data":"e77b74ea90e47c29296a00f8c1b6f31839f7093a8f4ad3034adf16042441734b"} Oct 14 08:49:23 crc kubenswrapper[5058]: I1014 08:49:23.790237 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:23 crc kubenswrapper[5058]: E1014 08:49:23.791025 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.480968 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.516296 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-cell1_63dc6cf4-b662-4c09-b37f-bfa8cab45bd6/mariadb-client-6-cell1/0.log" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.546420 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-cell1"] Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.551547 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-cell1"] Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.603220 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ckkd\" (UniqueName: \"kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd\") pod \"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6\" (UID: \"63dc6cf4-b662-4c09-b37f-bfa8cab45bd6\") " Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.611208 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd" (OuterVolumeSpecName: "kube-api-access-2ckkd") pod "63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" (UID: "63dc6cf4-b662-4c09-b37f-bfa8cab45bd6"). InnerVolumeSpecName "kube-api-access-2ckkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.706395 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ckkd\" (UniqueName: \"kubernetes.io/projected/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6-kube-api-access-2ckkd\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.727066 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-cell1"] Oct 14 08:49:24 crc kubenswrapper[5058]: E1014 08:49:24.727535 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" containerName="mariadb-client-6-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.727571 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" containerName="mariadb-client-6-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.727862 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" containerName="mariadb-client-6-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.730467 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.743540 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-cell1"] Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.808176 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcssn\" (UniqueName: \"kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn\") pod \"mariadb-client-7-cell1\" (UID: \"e4a81757-2c3b-4713-893f-a0e692c0221b\") " pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.810889 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63dc6cf4-b662-4c09-b37f-bfa8cab45bd6" path="/var/lib/kubelet/pods/63dc6cf4-b662-4c09-b37f-bfa8cab45bd6/volumes" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.909786 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcssn\" (UniqueName: \"kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn\") pod \"mariadb-client-7-cell1\" (UID: \"e4a81757-2c3b-4713-893f-a0e692c0221b\") " pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.934393 5058 scope.go:117] "RemoveContainer" containerID="6a1f77c3fcf2d388f9eaa845c2c7c8449d92bb47b70c8c68b43edf8e5d985442" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.934534 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell1" Oct 14 08:49:24 crc kubenswrapper[5058]: I1014 08:49:24.937695 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcssn\" (UniqueName: \"kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn\") pod \"mariadb-client-7-cell1\" (UID: \"e4a81757-2c3b-4713-893f-a0e692c0221b\") " pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:25 crc kubenswrapper[5058]: I1014 08:49:25.065422 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:25 crc kubenswrapper[5058]: I1014 08:49:25.719827 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-cell1"] Oct 14 08:49:25 crc kubenswrapper[5058]: W1014 08:49:25.725589 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a81757_2c3b_4713_893f_a0e692c0221b.slice/crio-81e2e15053576a7dbc4fe3566b4e35ca0d2dcb6858f3546074a091cc0dde45ce WatchSource:0}: Error finding container 81e2e15053576a7dbc4fe3566b4e35ca0d2dcb6858f3546074a091cc0dde45ce: Status 404 returned error can't find the container with id 81e2e15053576a7dbc4fe3566b4e35ca0d2dcb6858f3546074a091cc0dde45ce Oct 14 08:49:25 crc kubenswrapper[5058]: I1014 08:49:25.945515 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-cell1" event={"ID":"e4a81757-2c3b-4713-893f-a0e692c0221b","Type":"ContainerStarted","Data":"d979c4df3e3b4bfd49a5bef0b05d5d5c0daf7bfce2930f8f8e3b1a40e63b89e8"} Oct 14 08:49:25 crc kubenswrapper[5058]: I1014 08:49:25.945861 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-cell1" event={"ID":"e4a81757-2c3b-4713-893f-a0e692c0221b","Type":"ContainerStarted","Data":"81e2e15053576a7dbc4fe3566b4e35ca0d2dcb6858f3546074a091cc0dde45ce"} Oct 14 08:49:26 crc kubenswrapper[5058]: I1014 08:49:26.959319 5058 generic.go:334] "Generic (PLEG): container finished" podID="e4a81757-2c3b-4713-893f-a0e692c0221b" containerID="d979c4df3e3b4bfd49a5bef0b05d5d5c0daf7bfce2930f8f8e3b1a40e63b89e8" exitCode=0 Oct 14 08:49:26 crc kubenswrapper[5058]: I1014 08:49:26.959379 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-cell1" event={"ID":"e4a81757-2c3b-4713-893f-a0e692c0221b","Type":"ContainerDied","Data":"d979c4df3e3b4bfd49a5bef0b05d5d5c0daf7bfce2930f8f8e3b1a40e63b89e8"} Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.438550 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.463627 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-cell1_e4a81757-2c3b-4713-893f-a0e692c0221b/mariadb-client-7-cell1/0.log" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.490269 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-cell1"] Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.496692 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-cell1"] Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.573717 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcssn\" (UniqueName: \"kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn\") pod \"e4a81757-2c3b-4713-893f-a0e692c0221b\" (UID: \"e4a81757-2c3b-4713-893f-a0e692c0221b\") " Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.582671 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn" (OuterVolumeSpecName: "kube-api-access-kcssn") pod "e4a81757-2c3b-4713-893f-a0e692c0221b" (UID: "e4a81757-2c3b-4713-893f-a0e692c0221b"). InnerVolumeSpecName "kube-api-access-kcssn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.664124 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:27 crc kubenswrapper[5058]: E1014 08:49:27.664766 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a81757-2c3b-4713-893f-a0e692c0221b" containerName="mariadb-client-7-cell1" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.664838 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a81757-2c3b-4713-893f-a0e692c0221b" containerName="mariadb-client-7-cell1" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.665208 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a81757-2c3b-4713-893f-a0e692c0221b" containerName="mariadb-client-7-cell1" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.667012 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.675990 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcssn\" (UniqueName: \"kubernetes.io/projected/e4a81757-2c3b-4713-893f-a0e692c0221b-kube-api-access-kcssn\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.680613 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.778741 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgmh\" (UniqueName: \"kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh\") pod \"mariadb-client-2\" (UID: \"fee4f12c-dd66-4037-b5cf-b25e07a8084f\") " pod="openstack/mariadb-client-2" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.881458 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgmh\" (UniqueName: \"kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh\") pod \"mariadb-client-2\" (UID: \"fee4f12c-dd66-4037-b5cf-b25e07a8084f\") " pod="openstack/mariadb-client-2" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.916376 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlgmh\" (UniqueName: \"kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh\") pod \"mariadb-client-2\" (UID: \"fee4f12c-dd66-4037-b5cf-b25e07a8084f\") " pod="openstack/mariadb-client-2" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.971791 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e2e15053576a7dbc4fe3566b4e35ca0d2dcb6858f3546074a091cc0dde45ce" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.972973 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-cell1" Oct 14 08:49:27 crc kubenswrapper[5058]: I1014 08:49:27.998866 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:28 crc kubenswrapper[5058]: I1014 08:49:28.415781 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:28 crc kubenswrapper[5058]: W1014 08:49:28.421216 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfee4f12c_dd66_4037_b5cf_b25e07a8084f.slice/crio-eb4197a8d2e062b63059a9ae215a2d9bbbc96394454a9df6de736c585110a6a2 WatchSource:0}: Error finding container eb4197a8d2e062b63059a9ae215a2d9bbbc96394454a9df6de736c585110a6a2: Status 404 returned error can't find the container with id eb4197a8d2e062b63059a9ae215a2d9bbbc96394454a9df6de736c585110a6a2 Oct 14 08:49:28 crc kubenswrapper[5058]: I1014 08:49:28.802482 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a81757-2c3b-4713-893f-a0e692c0221b" path="/var/lib/kubelet/pods/e4a81757-2c3b-4713-893f-a0e692c0221b/volumes" Oct 14 08:49:28 crc kubenswrapper[5058]: I1014 08:49:28.984625 5058 generic.go:334] "Generic (PLEG): container finished" podID="fee4f12c-dd66-4037-b5cf-b25e07a8084f" containerID="32247a05928626708980ce11641ff712a6300ae3b92344dd7e3f54f26208a953" exitCode=1 Oct 14 08:49:28 crc kubenswrapper[5058]: I1014 08:49:28.984686 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"fee4f12c-dd66-4037-b5cf-b25e07a8084f","Type":"ContainerDied","Data":"32247a05928626708980ce11641ff712a6300ae3b92344dd7e3f54f26208a953"} Oct 14 08:49:28 crc kubenswrapper[5058]: I1014 08:49:28.984728 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"fee4f12c-dd66-4037-b5cf-b25e07a8084f","Type":"ContainerStarted","Data":"eb4197a8d2e062b63059a9ae215a2d9bbbc96394454a9df6de736c585110a6a2"} Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.477008 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.496738 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_fee4f12c-dd66-4037-b5cf-b25e07a8084f/mariadb-client-2/0.log" Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.528916 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.539324 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.631977 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgmh\" (UniqueName: \"kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh\") pod \"fee4f12c-dd66-4037-b5cf-b25e07a8084f\" (UID: \"fee4f12c-dd66-4037-b5cf-b25e07a8084f\") " Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.641491 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh" (OuterVolumeSpecName: "kube-api-access-qlgmh") pod "fee4f12c-dd66-4037-b5cf-b25e07a8084f" (UID: "fee4f12c-dd66-4037-b5cf-b25e07a8084f"). InnerVolumeSpecName "kube-api-access-qlgmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.733825 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgmh\" (UniqueName: \"kubernetes.io/projected/fee4f12c-dd66-4037-b5cf-b25e07a8084f-kube-api-access-qlgmh\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:30 crc kubenswrapper[5058]: I1014 08:49:30.806620 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee4f12c-dd66-4037-b5cf-b25e07a8084f" path="/var/lib/kubelet/pods/fee4f12c-dd66-4037-b5cf-b25e07a8084f/volumes" Oct 14 08:49:31 crc kubenswrapper[5058]: I1014 08:49:31.006566 5058 scope.go:117] "RemoveContainer" containerID="32247a05928626708980ce11641ff712a6300ae3b92344dd7e3f54f26208a953" Oct 14 08:49:31 crc kubenswrapper[5058]: I1014 08:49:31.006594 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.576350 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-cell2"] Oct 14 08:49:33 crc kubenswrapper[5058]: E1014 08:49:33.576968 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4f12c-dd66-4037-b5cf-b25e07a8084f" containerName="mariadb-client-2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.576983 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4f12c-dd66-4037-b5cf-b25e07a8084f" containerName="mariadb-client-2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.577182 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee4f12c-dd66-4037-b5cf-b25e07a8084f" containerName="mariadb-client-2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.577779 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.580060 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.588132 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-cell2"] Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.708266 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qlls\" (UniqueName: \"kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls\") pod \"mariadb-client-5-cell2\" (UID: \"775061c6-9e9a-4422-9112-485cc52743c6\") " pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.814781 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qlls\" (UniqueName: \"kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls\") pod \"mariadb-client-5-cell2\" (UID: \"775061c6-9e9a-4422-9112-485cc52743c6\") " pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.844345 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qlls\" (UniqueName: \"kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls\") pod \"mariadb-client-5-cell2\" (UID: \"775061c6-9e9a-4422-9112-485cc52743c6\") " pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:33 crc kubenswrapper[5058]: I1014 08:49:33.891518 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:34 crc kubenswrapper[5058]: I1014 08:49:34.282036 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-cell2"] Oct 14 08:49:34 crc kubenswrapper[5058]: I1014 08:49:34.790466 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:34 crc kubenswrapper[5058]: E1014 08:49:34.791258 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:49:35 crc kubenswrapper[5058]: I1014 08:49:35.067326 5058 generic.go:334] "Generic (PLEG): container finished" podID="775061c6-9e9a-4422-9112-485cc52743c6" containerID="f0af85aabe2b1d47bce78d82aa5f6890d8464d6898d6b4140acb95117c2455bc" exitCode=0 Oct 14 08:49:35 crc kubenswrapper[5058]: I1014 08:49:35.067398 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-cell2" event={"ID":"775061c6-9e9a-4422-9112-485cc52743c6","Type":"ContainerDied","Data":"f0af85aabe2b1d47bce78d82aa5f6890d8464d6898d6b4140acb95117c2455bc"} Oct 14 08:49:35 crc kubenswrapper[5058]: I1014 08:49:35.067439 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-cell2" event={"ID":"775061c6-9e9a-4422-9112-485cc52743c6","Type":"ContainerStarted","Data":"e76e5e87ab0fdc5849ef2c60784fa61d2fa0b519daa21ea57e7f98f20b8279bf"} Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.561463 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.587394 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-cell2_775061c6-9e9a-4422-9112-485cc52743c6/mariadb-client-5-cell2/0.log" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.626941 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-cell2"] Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.631995 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-cell2"] Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.668105 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qlls\" (UniqueName: \"kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls\") pod \"775061c6-9e9a-4422-9112-485cc52743c6\" (UID: \"775061c6-9e9a-4422-9112-485cc52743c6\") " Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.673539 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls" (OuterVolumeSpecName: "kube-api-access-2qlls") pod "775061c6-9e9a-4422-9112-485cc52743c6" (UID: "775061c6-9e9a-4422-9112-485cc52743c6"). InnerVolumeSpecName "kube-api-access-2qlls". 
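[editor's note] Interleaved with the mariadb-client churn, machine-config-daemon-q5fhs is stuck in CrashLoopBackOff: each sync attempt above and below is skipped with "back-off 5m0s restarting failed container". That 5m0s is kubelet's restart-backoff cap; the delay grows exponentially per crash and is clamped there. A sketch of that policy, assuming kubelet's long-standing 10s base and 2x factor — only the 5m cap is actually visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

// Capped exponential backoff as kubelet applies to restarting failed
// containers: the delay doubles on each crash and is clamped at a maximum.
// The 10s base and 5m cap are assumptions (kubelet defaults); the log only
// shows the 5m0s cap being hit.
func restartDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, restartDelay(r, 10*time.Second, 5*time.Minute))
	}
}
```

The retries keep failing until 08:50:33 below, when the container is finally allowed to start again.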
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.742207 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-cell2"] Oct 14 08:49:36 crc kubenswrapper[5058]: E1014 08:49:36.742838 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775061c6-9e9a-4422-9112-485cc52743c6" containerName="mariadb-client-5-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.742868 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="775061c6-9e9a-4422-9112-485cc52743c6" containerName="mariadb-client-5-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.743182 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="775061c6-9e9a-4422-9112-485cc52743c6" containerName="mariadb-client-5-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.744138 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.747220 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-cell2"] Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.772859 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9565\" (UniqueName: \"kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565\") pod \"mariadb-client-6-cell2\" (UID: \"7a60a5cc-236f-4193-8e00-8416ff394689\") " pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.773033 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qlls\" (UniqueName: \"kubernetes.io/projected/775061c6-9e9a-4422-9112-485cc52743c6-kube-api-access-2qlls\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.802841 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775061c6-9e9a-4422-9112-485cc52743c6" path="/var/lib/kubelet/pods/775061c6-9e9a-4422-9112-485cc52743c6/volumes" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.875168 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9565\" (UniqueName: \"kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565\") pod \"mariadb-client-6-cell2\" (UID: \"7a60a5cc-236f-4193-8e00-8416ff394689\") " pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:36 crc kubenswrapper[5058]: I1014 08:49:36.895198 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9565\" (UniqueName: \"kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565\") pod \"mariadb-client-6-cell2\" (UID: \"7a60a5cc-236f-4193-8e00-8416ff394689\") " pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:37 crc kubenswrapper[5058]: I1014 08:49:37.074553 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:37 crc kubenswrapper[5058]: I1014 08:49:37.090626 5058 scope.go:117] "RemoveContainer" containerID="f0af85aabe2b1d47bce78d82aa5f6890d8464d6898d6b4140acb95117c2455bc" Oct 14 08:49:37 crc kubenswrapper[5058]: I1014 08:49:37.090719 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-cell2" Oct 14 08:49:37 crc kubenswrapper[5058]: I1014 08:49:37.460718 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-cell2"] Oct 14 08:49:38 crc kubenswrapper[5058]: I1014 08:49:38.117514 5058 generic.go:334] "Generic (PLEG): container finished" podID="7a60a5cc-236f-4193-8e00-8416ff394689" containerID="5e24d176c4d038f05fdd2437e8c8d179745499cdd65d3115520fc2004e37d610" exitCode=0 Oct 14 08:49:38 crc kubenswrapper[5058]: I1014 08:49:38.117834 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-cell2" event={"ID":"7a60a5cc-236f-4193-8e00-8416ff394689","Type":"ContainerDied","Data":"5e24d176c4d038f05fdd2437e8c8d179745499cdd65d3115520fc2004e37d610"} Oct 14 08:49:38 crc kubenswrapper[5058]: I1014 08:49:38.117866 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-cell2" event={"ID":"7a60a5cc-236f-4193-8e00-8416ff394689","Type":"ContainerStarted","Data":"c457b21abd3ace3ad95b07518984a679e383924515621960fbc3e332ea5fa167"} Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.606989 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.642123 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-cell2_7a60a5cc-236f-4193-8e00-8416ff394689/mariadb-client-6-cell2/0.log" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.650714 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9565\" (UniqueName: \"kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565\") pod \"7a60a5cc-236f-4193-8e00-8416ff394689\" (UID: \"7a60a5cc-236f-4193-8e00-8416ff394689\") " Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.667151 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565" (OuterVolumeSpecName: "kube-api-access-n9565") pod "7a60a5cc-236f-4193-8e00-8416ff394689" (UID: "7a60a5cc-236f-4193-8e00-8416ff394689"). InnerVolumeSpecName "kube-api-access-n9565". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.683530 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-cell2"] Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.692744 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-cell2"] Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.753449 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9565\" (UniqueName: \"kubernetes.io/projected/7a60a5cc-236f-4193-8e00-8416ff394689-kube-api-access-n9565\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.879283 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-cell2"] Oct 14 08:49:39 crc kubenswrapper[5058]: E1014 08:49:39.880284 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a60a5cc-236f-4193-8e00-8416ff394689" containerName="mariadb-client-6-cell2" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.880503 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a60a5cc-236f-4193-8e00-8416ff394689" containerName="mariadb-client-6-cell2" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.881063 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a60a5cc-236f-4193-8e00-8416ff394689" containerName="mariadb-client-6-cell2" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.882475 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.887607 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-cell2"] Oct 14 08:49:39 crc kubenswrapper[5058]: I1014 08:49:39.956361 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcgd\" (UniqueName: \"kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd\") pod \"mariadb-client-7-cell2\" (UID: \"26436084-43d7-4729-a6ee-39e170395aba\") " pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.058905 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcgd\" (UniqueName: \"kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd\") pod \"mariadb-client-7-cell2\" (UID: \"26436084-43d7-4729-a6ee-39e170395aba\") " pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.088114 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcgd\" (UniqueName: \"kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd\") pod \"mariadb-client-7-cell2\" (UID: \"26436084-43d7-4729-a6ee-39e170395aba\") " pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.141622 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c457b21abd3ace3ad95b07518984a679e383924515621960fbc3e332ea5fa167" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.141760 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-cell2" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.211416 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.691096 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-cell2"] Oct 14 08:49:40 crc kubenswrapper[5058]: I1014 08:49:40.803713 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a60a5cc-236f-4193-8e00-8416ff394689" path="/var/lib/kubelet/pods/7a60a5cc-236f-4193-8e00-8416ff394689/volumes" Oct 14 08:49:41 crc kubenswrapper[5058]: I1014 08:49:41.153411 5058 generic.go:334] "Generic (PLEG): container finished" podID="26436084-43d7-4729-a6ee-39e170395aba" containerID="29d56be66799dc9afe008abadd93f888d5a355d671eddfd50c62bd17ac02025e" exitCode=0 Oct 14 08:49:41 crc kubenswrapper[5058]: I1014 08:49:41.153473 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-cell2" event={"ID":"26436084-43d7-4729-a6ee-39e170395aba","Type":"ContainerDied","Data":"29d56be66799dc9afe008abadd93f888d5a355d671eddfd50c62bd17ac02025e"} Oct 14 08:49:41 crc kubenswrapper[5058]: I1014 08:49:41.153527 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-cell2" event={"ID":"26436084-43d7-4729-a6ee-39e170395aba","Type":"ContainerStarted","Data":"4c40876bb53f8080481c7df7b05bb682540c76ac0c44a343008e95ef53cdb0c6"} Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.572738 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.593597 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-cell2_26436084-43d7-4729-a6ee-39e170395aba/mariadb-client-7-cell2/0.log" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.641910 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-cell2"] Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.655913 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-cell2"] Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.703289 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcgd\" (UniqueName: \"kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd\") pod \"26436084-43d7-4729-a6ee-39e170395aba\" (UID: \"26436084-43d7-4729-a6ee-39e170395aba\") " Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.713372 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd" (OuterVolumeSpecName: "kube-api-access-vzcgd") pod "26436084-43d7-4729-a6ee-39e170395aba" (UID: "26436084-43d7-4729-a6ee-39e170395aba"). InnerVolumeSpecName "kube-api-access-vzcgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.801072 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26436084-43d7-4729-a6ee-39e170395aba" path="/var/lib/kubelet/pods/26436084-43d7-4729-a6ee-39e170395aba/volumes" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.805212 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcgd\" (UniqueName: \"kubernetes.io/projected/26436084-43d7-4729-a6ee-39e170395aba-kube-api-access-vzcgd\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.839017 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:42 crc kubenswrapper[5058]: E1014 08:49:42.839454 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26436084-43d7-4729-a6ee-39e170395aba" containerName="mariadb-client-7-cell2" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.839478 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="26436084-43d7-4729-a6ee-39e170395aba" containerName="mariadb-client-7-cell2" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.839662 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="26436084-43d7-4729-a6ee-39e170395aba" containerName="mariadb-client-7-cell2" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.840436 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.847947 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:42 crc kubenswrapper[5058]: I1014 08:49:42.914766 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45p8r\" (UniqueName: \"kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r\") pod \"mariadb-client-2\" (UID: \"b7538531-9d93-4232-972c-cbef65f08145\") " pod="openstack/mariadb-client-2" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.017863 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45p8r\" (UniqueName: \"kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r\") pod \"mariadb-client-2\" (UID: \"b7538531-9d93-4232-972c-cbef65f08145\") " pod="openstack/mariadb-client-2" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.048043 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45p8r\" (UniqueName: \"kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r\") pod \"mariadb-client-2\" (UID: \"b7538531-9d93-4232-972c-cbef65f08145\") " pod="openstack/mariadb-client-2" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.172190 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.177470 5058 scope.go:117] "RemoveContainer" containerID="29d56be66799dc9afe008abadd93f888d5a355d671eddfd50c62bd17ac02025e" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.177536 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-cell2" Oct 14 08:49:43 crc kubenswrapper[5058]: I1014 08:49:43.556596 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:44 crc kubenswrapper[5058]: I1014 08:49:44.190355 5058 generic.go:334] "Generic (PLEG): container finished" podID="b7538531-9d93-4232-972c-cbef65f08145" containerID="54f59e1c3e40a3d5721244663ed9af819c00d73bd318b0ce1d45a74ef2dd29bf" exitCode=1 Oct 14 08:49:44 crc kubenswrapper[5058]: I1014 08:49:44.190451 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"b7538531-9d93-4232-972c-cbef65f08145","Type":"ContainerDied","Data":"54f59e1c3e40a3d5721244663ed9af819c00d73bd318b0ce1d45a74ef2dd29bf"} Oct 14 08:49:44 crc kubenswrapper[5058]: I1014 08:49:44.190817 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"b7538531-9d93-4232-972c-cbef65f08145","Type":"ContainerStarted","Data":"b2061938d468069439aaf79bae2a995814515ba61612f3fb81e40708c55aef64"} Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.639699 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.659850 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_b7538531-9d93-4232-972c-cbef65f08145/mariadb-client-2/0.log" Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.665417 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45p8r\" (UniqueName: \"kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r\") pod \"b7538531-9d93-4232-972c-cbef65f08145\" (UID: \"b7538531-9d93-4232-972c-cbef65f08145\") " Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.675041 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r" (OuterVolumeSpecName: "kube-api-access-45p8r") pod "b7538531-9d93-4232-972c-cbef65f08145" (UID: "b7538531-9d93-4232-972c-cbef65f08145"). InnerVolumeSpecName "kube-api-access-45p8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.696771 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.704298 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 14 08:49:45 crc kubenswrapper[5058]: I1014 08:49:45.767121 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45p8r\" (UniqueName: \"kubernetes.io/projected/b7538531-9d93-4232-972c-cbef65f08145-kube-api-access-45p8r\") on node \"crc\" DevicePath \"\"" Oct 14 08:49:46 crc kubenswrapper[5058]: I1014 08:49:46.218272 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2061938d468069439aaf79bae2a995814515ba61612f3fb81e40708c55aef64" Oct 14 08:49:46 crc kubenswrapper[5058]: I1014 08:49:46.218338 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 14 08:49:46 crc kubenswrapper[5058]: I1014 08:49:46.803626 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7538531-9d93-4232-972c-cbef65f08145" path="/var/lib/kubelet/pods/b7538531-9d93-4232-972c-cbef65f08145/volumes" Oct 14 08:49:47 crc kubenswrapper[5058]: I1014 08:49:47.790024 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:47 crc kubenswrapper[5058]: E1014 08:49:47.790409 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:49:58 crc kubenswrapper[5058]: I1014 08:49:58.790907 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:49:58 crc kubenswrapper[5058]: E1014 08:49:58.791740 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:50:03 crc kubenswrapper[5058]: I1014 08:50:03.950465 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:03 crc kubenswrapper[5058]: E1014 08:50:03.951518 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7538531-9d93-4232-972c-cbef65f08145" containerName="mariadb-client-2" Oct 14 08:50:03 crc kubenswrapper[5058]: I1014 08:50:03.951538 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7538531-9d93-4232-972c-cbef65f08145" containerName="mariadb-client-2" Oct 14 08:50:03 crc kubenswrapper[5058]: I1014 08:50:03.951854 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7538531-9d93-4232-972c-cbef65f08145" containerName="mariadb-client-2" Oct 14 08:50:03 crc kubenswrapper[5058]: I1014 08:50:03.953759 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:03 crc kubenswrapper[5058]: I1014 08:50:03.965745 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.003061 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.003154 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.003251 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xv8\" (UniqueName: \"kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.104683 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.104745 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.104832 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xv8\" (UniqueName: \"kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.105468 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.105663 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.135744 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w7xv8\" (UniqueName: \"kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8\") pod \"redhat-operators-2ccdl\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.301743 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:04 crc kubenswrapper[5058]: I1014 08:50:04.820564 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:05 crc kubenswrapper[5058]: I1014 08:50:05.448975 5058 generic.go:334] "Generic (PLEG): container finished" podID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerID="c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71" exitCode=0 Oct 14 08:50:05 crc kubenswrapper[5058]: I1014 08:50:05.449068 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerDied","Data":"c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71"} Oct 14 08:50:05 crc kubenswrapper[5058]: I1014 08:50:05.449170 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerStarted","Data":"7be17afdb46ce4b44d04d8da7e42f4eb69cbac12d689d3f9584ea06a780175e2"} Oct 14 08:50:07 crc kubenswrapper[5058]: I1014 08:50:07.472107 5058 generic.go:334] "Generic (PLEG): container finished" podID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerID="a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b" exitCode=0 Oct 14 08:50:07 crc kubenswrapper[5058]: I1014 08:50:07.472170 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerDied","Data":"a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b"} Oct 14 08:50:08 crc kubenswrapper[5058]: I1014 08:50:08.486741 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerStarted","Data":"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0"} Oct 14 08:50:08 crc kubenswrapper[5058]: I1014 08:50:08.521263 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ccdl" podStartSLOduration=2.8723571740000002 podStartE2EDuration="5.521233323s" podCreationTimestamp="2025-10-14 08:50:03 +0000 UTC" firstStartedPulling="2025-10-14 08:50:05.451447545 +0000 UTC m=+7353.362531381" lastFinishedPulling="2025-10-14 08:50:08.100323684 +0000 UTC m=+7356.011407530" observedRunningTime="2025-10-14 08:50:08.512151757 +0000 UTC m=+7356.423235633" watchObservedRunningTime="2025-10-14 08:50:08.521233323 +0000 UTC m=+7356.432317169" Oct 14 08:50:09 crc kubenswrapper[5058]: I1014 08:50:09.790664 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:50:09 crc kubenswrapper[5058]: E1014 08:50:09.791099 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:50:14 crc kubenswrapper[5058]: I1014 08:50:14.302435 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:14 crc kubenswrapper[5058]: I1014 08:50:14.303084 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:15 crc kubenswrapper[5058]: I1014 08:50:15.380854 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2ccdl" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="registry-server" probeResult="failure" output=< Oct 14 08:50:15 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 08:50:15 crc kubenswrapper[5058]: > Oct 14 08:50:22 crc kubenswrapper[5058]: I1014 08:50:22.800973 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:50:22 crc kubenswrapper[5058]: E1014 08:50:22.801747 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:50:24 crc kubenswrapper[5058]: I1014 08:50:24.382122 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:24 crc kubenswrapper[5058]: I1014 08:50:24.447421 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:24 crc kubenswrapper[5058]: I1014 08:50:24.622022 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:25 crc kubenswrapper[5058]: I1014 08:50:25.686567 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ccdl" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="registry-server" containerID="cri-o://ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0" gracePeriod=2 Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.257160 5058 util.go:48] "No ready sandbox for pod can be found. 
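[editor's note] The startup-probe failure above (`timeout: failed to connect service ":50051" within 1s`) is the catalog pod's registry-server not yet listening on its gRPC port; the message format matches a grpc_health_probe-style check, though the probe's exact command is not in the log. A stand-in reachability check under that assumption — it only tests TCP connectivity, not the gRPC health service the real probe would speak:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the registry-server port with the same 1s budget the failed probe
// above used; exit non-zero on failure, the way a probe command would.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("service reachable")
}
```

As the probe results above show, the pod simply needed a few more seconds: by 08:50:24 the startup probe reports "started" and readiness flips to "ready".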
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.439498 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities\") pod \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.439650 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content\") pod \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.439870 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xv8\" (UniqueName: \"kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8\") pod \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\" (UID: \"183f7f27-390d-4afa-aff0-cdfbedbe5d2b\") " Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.441773 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities" (OuterVolumeSpecName: "utilities") pod "183f7f27-390d-4afa-aff0-cdfbedbe5d2b" (UID: "183f7f27-390d-4afa-aff0-cdfbedbe5d2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.449462 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8" (OuterVolumeSpecName: "kube-api-access-w7xv8") pod "183f7f27-390d-4afa-aff0-cdfbedbe5d2b" (UID: "183f7f27-390d-4afa-aff0-cdfbedbe5d2b"). InnerVolumeSpecName "kube-api-access-w7xv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.542390 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xv8\" (UniqueName: \"kubernetes.io/projected/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-kube-api-access-w7xv8\") on node \"crc\" DevicePath \"\"" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.542445 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.551082 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "183f7f27-390d-4afa-aff0-cdfbedbe5d2b" (UID: "183f7f27-390d-4afa-aff0-cdfbedbe5d2b"). InnerVolumeSpecName "catalog-content". 
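[editor's note] The unmount sequence above is kubelet's volume manager reconciling actual state toward desired state: once the pod is deleted, the projected token volume and the empty-dir volumes go through UnmountVolume.TearDown and are then reported "Volume detached". A toy version of that desired/actual reconciliation loop — names are illustrative, not kubelet's reconciler types:

```go
package main

import "fmt"

// Volumes present in the desired set get "mounted"; volumes no longer
// desired get "torn down". Kubelet's real reconciler tracks much more
// state, but the converge-to-desired shape is the same.
func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q\n", v)
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q\n", v)
			delete(actual, v)
		}
	}
}

func main() {
	actual := map[string]bool{}
	// pod added: three volumes become desired, as for redhat-operators-2ccdl
	reconcile(map[string]bool{"kube-api-access-w7xv8": true, "utilities": true, "catalog-content": true}, actual)
	// pod deleted: desired set empties, everything is torn down
	reconcile(map[string]bool{}, actual)
}
```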
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.643839 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/183f7f27-390d-4afa-aff0-cdfbedbe5d2b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.698093 5058 generic.go:334] "Generic (PLEG): container finished" podID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerID="ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0" exitCode=0 Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.698158 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerDied","Data":"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0"} Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.698259 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ccdl" event={"ID":"183f7f27-390d-4afa-aff0-cdfbedbe5d2b","Type":"ContainerDied","Data":"7be17afdb46ce4b44d04d8da7e42f4eb69cbac12d689d3f9584ea06a780175e2"} Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.698275 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ccdl" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.698308 5058 scope.go:117] "RemoveContainer" containerID="ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.722156 5058 scope.go:117] "RemoveContainer" containerID="a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.737086 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.742224 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ccdl"] Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.759430 5058 scope.go:117] "RemoveContainer" containerID="c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.804923 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" path="/var/lib/kubelet/pods/183f7f27-390d-4afa-aff0-cdfbedbe5d2b/volumes" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.808673 5058 scope.go:117] "RemoveContainer" containerID="ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0" Oct 14 08:50:26 crc kubenswrapper[5058]: E1014 08:50:26.809103 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0\": container with ID starting with ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0 not found: ID does not exist" containerID="ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.809145 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0"} err="failed to get container status \"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0\": rpc error: code = NotFound desc 
= could not find container \"ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0\": container with ID starting with ef91123699f98e468bb5dd7121c940ecee7fb038891aa9596a6801ea77ed93e0 not found: ID does not exist" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.809198 5058 scope.go:117] "RemoveContainer" containerID="a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b" Oct 14 08:50:26 crc kubenswrapper[5058]: E1014 08:50:26.809578 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b\": container with ID starting with a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b not found: ID does not exist" containerID="a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.809629 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b"} err="failed to get container status \"a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b\": rpc error: code = NotFound desc = could not find container \"a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b\": container with ID starting with a04f0ea49265ed72c066e1ee3593b1ceabf236ccdc20bd63959f46c8c9d2369b not found: ID does not exist" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.809667 5058 scope.go:117] "RemoveContainer" containerID="c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71" Oct 14 08:50:26 crc kubenswrapper[5058]: E1014 08:50:26.810016 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71\": container with ID starting with c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71 not found: ID does not exist" containerID="c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71" Oct 14 08:50:26 crc kubenswrapper[5058]: I1014 08:50:26.810051 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71"} err="failed to get container status \"c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71\": rpc error: code = NotFound desc = could not find container \"c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71\": container with ID starting with c71df744ad1ee4115b8fa16de82576f761659b57041c817ebfe5e3db5d94eb71 not found: ID does not exist" Oct 14 08:50:33 crc kubenswrapper[5058]: I1014 08:50:33.790949 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:50:34 crc kubenswrapper[5058]: I1014 08:50:34.802880 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6"} Oct 14 08:50:40 crc kubenswrapper[5058]: I1014 08:50:40.728679 5058 scope.go:117] "RemoveContainer" containerID="bfde8c282ab91184765e7ba26539137fb0c1ef36846f08cd06351e7ad1bb3191" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.107429 5058 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:31 crc kubenswrapper[5058]: E1014 08:51:31.110863 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="extract-utilities" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.110906 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="extract-utilities" Oct 14 08:51:31 crc kubenswrapper[5058]: E1014 08:51:31.110930 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="registry-server" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.110943 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="registry-server" Oct 14 08:51:31 crc kubenswrapper[5058]: E1014 08:51:31.110977 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="extract-content" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.110988 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="extract-content" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.111256 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="183f7f27-390d-4afa-aff0-cdfbedbe5d2b" containerName="registry-server" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.113061 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.156656 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.231292 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjp4q\" (UniqueName: \"kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.231591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.231777 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.333444 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjp4q\" (UniqueName: \"kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.333573 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.334137 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.334212 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.334524 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.358470 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjp4q\" (UniqueName: \"kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q\") pod \"community-operators-bhnk7\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.461661 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:31 crc kubenswrapper[5058]: I1014 08:51:31.962185 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:32 crc kubenswrapper[5058]: I1014 08:51:32.475474 5058 generic.go:334] "Generic (PLEG): container finished" podID="50873888-1578-4335-8761-be76a3d76f6e" containerID="99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6" exitCode=0 Oct 14 08:51:32 crc kubenswrapper[5058]: I1014 08:51:32.475666 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerDied","Data":"99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6"} Oct 14 08:51:32 crc kubenswrapper[5058]: I1014 08:51:32.476090 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerStarted","Data":"447cfd2277a2e6b2e7a733e1f5f0c520af4e095b58bb28ae59a6eb6c0e935d3b"} Oct 14 08:51:32 crc kubenswrapper[5058]: I1014 08:51:32.479014 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:51:34 crc kubenswrapper[5058]: I1014 08:51:34.504829 5058 generic.go:334] "Generic (PLEG): container finished" podID="50873888-1578-4335-8761-be76a3d76f6e" containerID="99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754" exitCode=0 Oct 14 08:51:34 crc kubenswrapper[5058]: I1014 08:51:34.504995 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerDied","Data":"99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754"} Oct 14 08:51:36 crc kubenswrapper[5058]: I1014 08:51:36.587690 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerStarted","Data":"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20"} Oct 14 08:51:36 crc kubenswrapper[5058]: I1014 08:51:36.620768 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhnk7" podStartSLOduration=2.853864078 podStartE2EDuration="5.620737012s" podCreationTimestamp="2025-10-14 08:51:31 +0000 UTC" firstStartedPulling="2025-10-14 08:51:32.478581835 +0000 UTC m=+7440.389665671" lastFinishedPulling="2025-10-14 08:51:35.245454769 +0000 UTC m=+7443.156538605" observedRunningTime="2025-10-14 08:51:36.616395905 +0000 UTC m=+7444.527479781" watchObservedRunningTime="2025-10-14 08:51:36.620737012 +0000 UTC m=+7444.531820818" Oct 14 08:51:41 crc kubenswrapper[5058]: I1014 08:51:41.462613 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:41 crc kubenswrapper[5058]: I1014 08:51:41.463041 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:41 crc kubenswrapper[5058]: I1014 08:51:41.544887 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:41 crc kubenswrapper[5058]: I1014 08:51:41.700049 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:41 crc kubenswrapper[5058]: I1014 08:51:41.795981 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:43 crc kubenswrapper[5058]: I1014 08:51:43.655739 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bhnk7" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="registry-server" containerID="cri-o://c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20" gracePeriod=2 Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.206064 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.324952 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities\") pod \"50873888-1578-4335-8761-be76a3d76f6e\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.325045 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content\") pod \"50873888-1578-4335-8761-be76a3d76f6e\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.325200 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjp4q\" (UniqueName: \"kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q\") pod \"50873888-1578-4335-8761-be76a3d76f6e\" (UID: \"50873888-1578-4335-8761-be76a3d76f6e\") " Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.326157 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities" (OuterVolumeSpecName: "utilities") pod "50873888-1578-4335-8761-be76a3d76f6e" (UID: "50873888-1578-4335-8761-be76a3d76f6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.326261 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.334292 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q" (OuterVolumeSpecName: "kube-api-access-zjp4q") pod "50873888-1578-4335-8761-be76a3d76f6e" (UID: "50873888-1578-4335-8761-be76a3d76f6e"). InnerVolumeSpecName "kube-api-access-zjp4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.388995 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50873888-1578-4335-8761-be76a3d76f6e" (UID: "50873888-1578-4335-8761-be76a3d76f6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.428332 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50873888-1578-4335-8761-be76a3d76f6e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.428357 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjp4q\" (UniqueName: \"kubernetes.io/projected/50873888-1578-4335-8761-be76a3d76f6e-kube-api-access-zjp4q\") on node \"crc\" DevicePath \"\"" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.670927 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhnk7" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.670962 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerDied","Data":"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20"} Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.670887 5058 generic.go:334] "Generic (PLEG): container finished" podID="50873888-1578-4335-8761-be76a3d76f6e" containerID="c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20" exitCode=0 Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.671037 5058 scope.go:117] "RemoveContainer" containerID="c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.671100 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhnk7" event={"ID":"50873888-1578-4335-8761-be76a3d76f6e","Type":"ContainerDied","Data":"447cfd2277a2e6b2e7a733e1f5f0c520af4e095b58bb28ae59a6eb6c0e935d3b"} Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.704631 5058 scope.go:117] "RemoveContainer" containerID="99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.733555 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.755348 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhnk7"] Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.768205 5058 scope.go:117] "RemoveContainer" containerID="99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.794338 5058 scope.go:117] "RemoveContainer" containerID="c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20" Oct 14 08:51:44 crc kubenswrapper[5058]: E1014 08:51:44.794748 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20\": container with ID starting with c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20 not found: ID does not exist" containerID="c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.794963 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20"} err="failed to get container status 
\"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20\": rpc error: code = NotFound desc = could not find container \"c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20\": container with ID starting with c58d06096c0eb33cce9b07b32018107528c8d19eef78ec50c29ba188de062b20 not found: ID does not exist" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.795023 5058 scope.go:117] "RemoveContainer" containerID="99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754" Oct 14 08:51:44 crc kubenswrapper[5058]: E1014 08:51:44.795459 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754\": container with ID starting with 99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754 not found: ID does not exist" containerID="99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.795552 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754"} err="failed to get container status \"99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754\": rpc error: code = NotFound desc = could not find container \"99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754\": container with ID starting with 99edf4618b76b91bdce26fe8584867e845c23d88c02562e63653aba5a2d7e754 not found: ID does not exist" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.795622 5058 scope.go:117] "RemoveContainer" containerID="99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6" Oct 14 08:51:44 crc kubenswrapper[5058]: E1014 08:51:44.796239 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6\": container with ID starting with 99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6 not found: ID does not exist" containerID="99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.796272 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6"} err="failed to get container status \"99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6\": rpc error: code = NotFound desc = could not find container \"99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6\": container with ID starting with 99a09c48e258629f7eb38c7c4590f5969c50d21c8fb757c497de8342c4aaf5a6 not found: ID does not exist" Oct 14 08:51:44 crc kubenswrapper[5058]: I1014 08:51:44.808491 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50873888-1578-4335-8761-be76a3d76f6e" path="/var/lib/kubelet/pods/50873888-1578-4335-8761-be76a3d76f6e/volumes" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.361762 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:23 crc kubenswrapper[5058]: E1014 08:52:23.362950 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="extract-utilities" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.362976 5058 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="extract-utilities" Oct 14 08:52:23 crc kubenswrapper[5058]: E1014 08:52:23.363029 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="registry-server" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.363042 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="registry-server" Oct 14 08:52:23 crc kubenswrapper[5058]: E1014 08:52:23.363073 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="extract-content" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.363086 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="extract-content" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.363376 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="50873888-1578-4335-8761-be76a3d76f6e" containerName="registry-server" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.365886 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.385138 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.494010 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.494977 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghb7\" (UniqueName: \"kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.495093 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.597279 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghb7\" (UniqueName: \"kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.597371 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 
08:52:23.597489 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.598245 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.598283 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.628309 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghb7\" (UniqueName: \"kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7\") pod \"certified-operators-xh2l9\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:23 crc kubenswrapper[5058]: I1014 08:52:23.714348 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:24 crc kubenswrapper[5058]: I1014 08:52:24.258003 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:25 crc kubenswrapper[5058]: I1014 08:52:25.136176 5058 generic.go:334] "Generic (PLEG): container finished" podID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerID="d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174" exitCode=0 Oct 14 08:52:25 crc kubenswrapper[5058]: I1014 08:52:25.136244 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerDied","Data":"d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174"} Oct 14 08:52:25 crc kubenswrapper[5058]: I1014 08:52:25.136286 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerStarted","Data":"d81461fec1cf8ab19274ea21119be720cb3e7085e9d343063ced38d4dc7811c8"} Oct 14 08:52:26 crc kubenswrapper[5058]: I1014 08:52:26.150769 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerStarted","Data":"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060"} Oct 14 08:52:27 crc kubenswrapper[5058]: I1014 08:52:27.165956 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerDied","Data":"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060"} Oct 14 08:52:27 crc kubenswrapper[5058]: I1014 08:52:27.165733 5058 generic.go:334] "Generic (PLEG): container finished" 
podID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerID="c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060" exitCode=0 Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.183850 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerStarted","Data":"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611"} Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.209083 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xh2l9" podStartSLOduration=2.642158077 podStartE2EDuration="5.209059404s" podCreationTimestamp="2025-10-14 08:52:23 +0000 UTC" firstStartedPulling="2025-10-14 08:52:25.139874414 +0000 UTC m=+7493.050958260" lastFinishedPulling="2025-10-14 08:52:27.706775741 +0000 UTC m=+7495.617859587" observedRunningTime="2025-10-14 08:52:28.206630823 +0000 UTC m=+7496.117714669" watchObservedRunningTime="2025-10-14 08:52:28.209059404 +0000 UTC m=+7496.120143220" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.354164 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.357176 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.368898 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.512583 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.512666 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42lr\" (UniqueName: \"kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.512832 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.614989 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.615046 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42lr\" (UniqueName: \"kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr\") pod 
\"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.615114 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.615681 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.615932 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.647953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l42lr\" (UniqueName: \"kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr\") pod \"redhat-marketplace-dsjrg\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:28 crc kubenswrapper[5058]: I1014 08:52:28.681375 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:29 crc kubenswrapper[5058]: I1014 08:52:29.154905 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:29 crc kubenswrapper[5058]: W1014 08:52:29.160377 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod994e101b_9eb7_4b2c_a614_e9add02b276e.slice/crio-7f5147cd116f935e632ea8c3ddcea539df7df5bf77f81d3ebf766d49c4f14a85 WatchSource:0}: Error finding container 7f5147cd116f935e632ea8c3ddcea539df7df5bf77f81d3ebf766d49c4f14a85: Status 404 returned error can't find the container with id 7f5147cd116f935e632ea8c3ddcea539df7df5bf77f81d3ebf766d49c4f14a85 Oct 14 08:52:29 crc kubenswrapper[5058]: I1014 08:52:29.195285 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerStarted","Data":"7f5147cd116f935e632ea8c3ddcea539df7df5bf77f81d3ebf766d49c4f14a85"} Oct 14 08:52:30 crc kubenswrapper[5058]: I1014 08:52:30.210305 5058 generic.go:334] "Generic (PLEG): container finished" podID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerID="046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a" exitCode=0 Oct 14 08:52:30 crc kubenswrapper[5058]: I1014 08:52:30.210361 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerDied","Data":"046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a"} Oct 14 08:52:32 crc kubenswrapper[5058]: I1014 08:52:32.234376 5058 generic.go:334] "Generic (PLEG): container finished" podID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerID="b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe" exitCode=0 Oct 14 08:52:32 crc kubenswrapper[5058]: I1014 08:52:32.234470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerDied","Data":"b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe"} Oct 14 08:52:33 crc kubenswrapper[5058]: I1014 08:52:33.254019 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerStarted","Data":"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f"} Oct 14 08:52:33 crc kubenswrapper[5058]: I1014 08:52:33.281478 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dsjrg" podStartSLOduration=2.715758008 podStartE2EDuration="5.28145141s" podCreationTimestamp="2025-10-14 08:52:28 +0000 UTC" firstStartedPulling="2025-10-14 08:52:30.213691412 +0000 UTC m=+7498.124775258" lastFinishedPulling="2025-10-14 08:52:32.779384824 +0000 UTC m=+7500.690468660" observedRunningTime="2025-10-14 08:52:33.279949716 +0000 UTC m=+7501.191033562" watchObservedRunningTime="2025-10-14 08:52:33.28145141 +0000 UTC m=+7501.192535256" Oct 14 08:52:33 crc kubenswrapper[5058]: I1014 08:52:33.715317 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:33 crc kubenswrapper[5058]: I1014 08:52:33.715376 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:33 crc kubenswrapper[5058]: I1014 08:52:33.772223 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:34 crc kubenswrapper[5058]: I1014 08:52:34.341857 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:36 crc kubenswrapper[5058]: I1014 08:52:36.745665 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.296473 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xh2l9" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="registry-server" containerID="cri-o://5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611" gracePeriod=2 Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.767854 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.909041 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content\") pod \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.909194 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities\") pod \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.909323 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gghb7\" (UniqueName: \"kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7\") pod \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\" (UID: \"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6\") " Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.910063 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities" (OuterVolumeSpecName: "utilities") pod "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" (UID: "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.912031 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.914717 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7" (OuterVolumeSpecName: "kube-api-access-gghb7") pod "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" (UID: "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6"). InnerVolumeSpecName "kube-api-access-gghb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:52:37 crc kubenswrapper[5058]: I1014 08:52:37.972888 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" (UID: "4b7187a4-2acf-4fe9-acbe-ec78243a3ea6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.013254 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.013553 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gghb7\" (UniqueName: \"kubernetes.io/projected/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6-kube-api-access-gghb7\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.315152 5058 generic.go:334] "Generic (PLEG): container finished" podID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerID="5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611" exitCode=0 Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.315223 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerDied","Data":"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611"} Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.315265 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xh2l9" event={"ID":"4b7187a4-2acf-4fe9-acbe-ec78243a3ea6","Type":"ContainerDied","Data":"d81461fec1cf8ab19274ea21119be720cb3e7085e9d343063ced38d4dc7811c8"} Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.315296 5058 scope.go:117] "RemoveContainer" containerID="5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.315525 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xh2l9" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.346229 5058 scope.go:117] "RemoveContainer" containerID="c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.377188 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.386341 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xh2l9"] Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.433055 5058 scope.go:117] "RemoveContainer" containerID="d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.456278 5058 scope.go:117] "RemoveContainer" containerID="5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611" Oct 14 08:52:38 crc kubenswrapper[5058]: E1014 08:52:38.456770 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611\": container with ID starting with 5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611 not found: ID does not exist" containerID="5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.456839 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611"} err="failed to get container status \"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611\": rpc error: code = NotFound desc = could not find container \"5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611\": container with ID starting with 5bd29263c208332ce459bbe7b03e78e227a0e5258ff85cde3b164978cf8ec611 not found: ID does not exist" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.456871 5058 scope.go:117] "RemoveContainer" containerID="c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060" Oct 14 08:52:38 crc kubenswrapper[5058]: E1014 08:52:38.457245 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060\": container with ID starting with c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060 not found: ID does not exist" containerID="c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.457281 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060"} err="failed to get container status \"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060\": rpc error: code = NotFound desc = could not find container \"c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060\": container with ID starting with c6708bd951c3343789db14c3aee07e99b1bb098e5c695d84218ef19333144060 not found: ID does not exist" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.457305 5058 scope.go:117] "RemoveContainer" containerID="d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174" Oct 14 08:52:38 crc kubenswrapper[5058]: E1014 08:52:38.457590 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174\": container with ID starting with d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174 not found: ID does not exist" containerID="d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.457629 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174"} err="failed to get container status \"d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174\": rpc error: code = NotFound desc = could not find container \"d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174\": container with ID starting with d812cc4ca32f5eb9332b3d06ece8ed47c059d549615155d0525e159c20e3a174 not found: ID does not exist" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.682444 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.682536 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.757912 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:38 crc kubenswrapper[5058]: I1014 08:52:38.814915 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" path="/var/lib/kubelet/pods/4b7187a4-2acf-4fe9-acbe-ec78243a3ea6/volumes" Oct 14 08:52:39 crc kubenswrapper[5058]: I1014 08:52:39.411202 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:40 crc kubenswrapper[5058]: I1014 08:52:40.548563 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:40 crc kubenswrapper[5058]: I1014 08:52:40.867707 5058 scope.go:117] "RemoveContainer" containerID="4f4253ec9724b81aa68d6805539e6565dd94f0b8a24efa7fedf7e102f3031c07" Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.355147 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsjrg" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="registry-server" containerID="cri-o://817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f" gracePeriod=2 Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.851541 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.989265 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content\") pod \"994e101b-9eb7-4b2c-a614-e9add02b276e\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.989322 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities\") pod \"994e101b-9eb7-4b2c-a614-e9add02b276e\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.989426 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l42lr\" (UniqueName: \"kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr\") pod \"994e101b-9eb7-4b2c-a614-e9add02b276e\" (UID: \"994e101b-9eb7-4b2c-a614-e9add02b276e\") " Oct 14 08:52:41 crc kubenswrapper[5058]: I1014 08:52:41.990995 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities" (OuterVolumeSpecName: "utilities") pod "994e101b-9eb7-4b2c-a614-e9add02b276e" (UID: "994e101b-9eb7-4b2c-a614-e9add02b276e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.000219 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr" (OuterVolumeSpecName: "kube-api-access-l42lr") pod "994e101b-9eb7-4b2c-a614-e9add02b276e" (UID: "994e101b-9eb7-4b2c-a614-e9add02b276e"). InnerVolumeSpecName "kube-api-access-l42lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.006107 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "994e101b-9eb7-4b2c-a614-e9add02b276e" (UID: "994e101b-9eb7-4b2c-a614-e9add02b276e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.091370 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.091485 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l42lr\" (UniqueName: \"kubernetes.io/projected/994e101b-9eb7-4b2c-a614-e9add02b276e-kube-api-access-l42lr\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.091517 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994e101b-9eb7-4b2c-a614-e9add02b276e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.369467 5058 generic.go:334] "Generic (PLEG): container finished" podID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerID="817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f" exitCode=0 Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.369534 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerDied","Data":"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f"} Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.369864 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsjrg" event={"ID":"994e101b-9eb7-4b2c-a614-e9add02b276e","Type":"ContainerDied","Data":"7f5147cd116f935e632ea8c3ddcea539df7df5bf77f81d3ebf766d49c4f14a85"} Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.369903 5058 scope.go:117] "RemoveContainer" containerID="817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.369571 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsjrg" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.404724 5058 scope.go:117] "RemoveContainer" containerID="b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.427052 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.440185 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsjrg"] Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.464432 5058 scope.go:117] "RemoveContainer" containerID="046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.503436 5058 scope.go:117] "RemoveContainer" containerID="817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f" Oct 14 08:52:42 crc kubenswrapper[5058]: E1014 08:52:42.504630 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f\": container with ID starting with 817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f not found: ID does not exist" containerID="817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.504727 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f"} err="failed to get container status \"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f\": rpc error: code = NotFound desc = could not find container \"817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f\": container with ID starting with 817f8681d21b8dc156f2cc1d698f82ca274b8b3fae2722225cc1599cc4eafe6f not found: ID does not exist" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.504784 5058 scope.go:117] "RemoveContainer" containerID="b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe" Oct 14 08:52:42 crc kubenswrapper[5058]: E1014 08:52:42.505314 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe\": container with ID starting with b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe not found: ID does not exist" containerID="b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.505380 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe"} err="failed to get container status \"b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe\": rpc error: code = NotFound desc = could not find container \"b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe\": container with ID starting with b3790e5450a8fc607f641bb5c3a97db15fb630b23003aa5dd99f6a6e9191fdfe not found: ID does not exist" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.505460 5058 scope.go:117] "RemoveContainer" containerID="046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a" Oct 14 08:52:42 crc kubenswrapper[5058]: E1014 08:52:42.505880 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a\": container with ID starting with 046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a not found: ID does not exist" containerID="046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.505927 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a"} err="failed to get container status \"046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a\": rpc error: code = NotFound desc = could not find container \"046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a\": container with ID starting with 046ecaf3147f7c6fd12bc633b1456adcbba8fdbec82b076dd0b76b3c50336e8a not found: ID does not exist" Oct 14 08:52:42 crc kubenswrapper[5058]: I1014 08:52:42.813613 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" path="/var/lib/kubelet/pods/994e101b-9eb7-4b2c-a614-e9add02b276e/volumes" Oct 14 08:53:03 crc kubenswrapper[5058]: I1014 08:53:03.656500 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:53:03 crc kubenswrapper[5058]: I1014 08:53:03.657369 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:53:33 crc kubenswrapper[5058]: I1014 08:53:33.656203 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:53:33 crc kubenswrapper[5058]: I1014 08:53:33.657093 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:53:40 crc kubenswrapper[5058]: I1014 08:53:40.962483 5058 scope.go:117] "RemoveContainer" containerID="712343329c0c947466ba1abca909118c28943afe3ec45e5e02a1fca515e38be0" Oct 14 08:54:03 crc kubenswrapper[5058]: I1014 08:54:03.656085 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:54:03 crc kubenswrapper[5058]: I1014 08:54:03.656782 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:54:03 crc kubenswrapper[5058]: I1014 08:54:03.656901 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 08:54:03 crc kubenswrapper[5058]: I1014 08:54:03.657732 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 08:54:03 crc kubenswrapper[5058]: I1014 08:54:03.657831 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6" gracePeriod=600 Oct 14 08:54:04 crc kubenswrapper[5058]: I1014 08:54:04.290025 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6" exitCode=0 Oct 14 08:54:04 crc kubenswrapper[5058]: I1014 08:54:04.290106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6"} Oct 14 08:54:04 crc kubenswrapper[5058]: I1014 08:54:04.290464 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"} Oct 14 08:54:04 crc kubenswrapper[5058]: I1014 08:54:04.290503 5058 scope.go:117] "RemoveContainer" containerID="134eacf49271b940af0d223a3b233fce876a3c8fde9233c91632a13305b1b74e" Oct 14 08:54:41 crc kubenswrapper[5058]: I1014 08:54:41.046644 5058 scope.go:117] "RemoveContainer" containerID="d9cdac61f8e80785ea81abdb07828d13d775dc514928241bd1f5e76b0c7ee159" Oct 14 08:54:41 crc kubenswrapper[5058]: I1014 08:54:41.078524 5058 scope.go:117] "RemoveContainer" containerID="27d9fdf1a5f38ac95dc58783021917355575ea2bec4028fb9ca176133f47dc36" Oct 14 08:54:41 crc kubenswrapper[5058]: I1014 08:54:41.135747 5058 scope.go:117] "RemoveContainer" containerID="726ccc7e9e96be4b2eb7d995d10ba7143bac3d060f76940d7abea176638fdb1b" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.055466 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056885 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.056905 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056916 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="extract-utilities" Oct 14 08:55:28 crc 
kubenswrapper[5058]: I1014 08:55:28.056923 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="extract-utilities" Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056938 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="extract-content" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.056944 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="extract-content" Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056954 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.056960 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056970 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="extract-content" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.056976 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="extract-content" Oct 14 08:55:28 crc kubenswrapper[5058]: E1014 08:55:28.056988 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="extract-utilities" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.056994 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="extract-utilities" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.057133 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="994e101b-9eb7-4b2c-a614-e9add02b276e" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.057157 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7187a4-2acf-4fe9-acbe-ec78243a3ea6" containerName="registry-server" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.057645 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.067102 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.107914 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.142229 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkzt\" (UniqueName: \"kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.142310 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.244742 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkzt\" (UniqueName: \"kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.244932 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data" Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.250514 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.250564 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b59ad06b955bd481bfeefe43d95000901b8e74680722e091d69eae2b131d5fc/globalmount\"" pod="openstack/mariadb-copy-data"
Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.267564 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkzt\" (UniqueName: \"kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data"
Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.295959 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") pod \"mariadb-copy-data\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " pod="openstack/mariadb-copy-data"
Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.435998 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Oct 14 08:55:28 crc kubenswrapper[5058]: I1014 08:55:28.815438 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Oct 14 08:55:29 crc kubenswrapper[5058]: I1014 08:55:29.166627 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d59cf81a-539e-4db7-9c0f-f5e075fea478","Type":"ContainerStarted","Data":"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1"}
Oct 14 08:55:29 crc kubenswrapper[5058]: I1014 08:55:29.166701 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d59cf81a-539e-4db7-9c0f-f5e075fea478","Type":"ContainerStarted","Data":"071d5bf949f8bd6db65487946d2df257aa11be30695b047b4107089abc582c71"}
Oct 14 08:55:29 crc kubenswrapper[5058]: I1014 08:55:29.185109 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.185088359 podStartE2EDuration="2.185088359s" podCreationTimestamp="2025-10-14 08:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:55:29.180601858 +0000 UTC m=+7677.091685664" watchObservedRunningTime="2025-10-14 08:55:29.185088359 +0000 UTC m=+7677.096172175"
Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.502983 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.505408 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
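Note: the pod_startup_latency_tracker entry above reports podStartSLOduration=2.185088359 for mariadb-copy-data. With both pull timestamps at the zero time (no image pull happened), the reported duration lines up exactly with the watch-observed running time minus the pod's creation timestamp, which can be checked directly with the values copied from the log:

package main

import (
	"fmt"
	"time"
)

// parse handles the timestamp format the kubelet logs use (Go's
// default time.Time formatting); Go's parser accepts the optional
// fractional seconds automatically.
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-10-14 08:55:27 +0000 UTC")
	observed := parse("2025-10-14 08:55:29.185088359 +0000 UTC")
	// Prints 2.185088359s, matching podStartSLOduration above.
	fmt.Println(observed.Sub(created))
}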
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.514987 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.630779 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6bv\" (UniqueName: \"kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv\") pod \"mariadb-client\" (UID: \"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb\") " pod="openstack/mariadb-client" Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.733008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6bv\" (UniqueName: \"kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv\") pod \"mariadb-client\" (UID: \"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb\") " pod="openstack/mariadb-client" Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.768783 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6bv\" (UniqueName: \"kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv\") pod \"mariadb-client\" (UID: \"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb\") " pod="openstack/mariadb-client" Oct 14 08:55:32 crc kubenswrapper[5058]: I1014 08:55:32.832184 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:33 crc kubenswrapper[5058]: I1014 08:55:33.177675 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:33 crc kubenswrapper[5058]: I1014 08:55:33.209188 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb","Type":"ContainerStarted","Data":"7e220718246d849fa2de585fe81c0faa6ff50d168f46b25598f2045f96eba090"} Oct 14 08:55:34 crc kubenswrapper[5058]: I1014 08:55:34.219888 5058 generic.go:334] "Generic (PLEG): container finished" podID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" containerID="311466384c17f1fe3cbe5c22fe939191c35a4c3b551aafb073956c113ee9c1e0" exitCode=0 Oct 14 08:55:34 crc kubenswrapper[5058]: I1014 08:55:34.219982 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb","Type":"ContainerDied","Data":"311466384c17f1fe3cbe5c22fe939191c35a4c3b551aafb073956c113ee9c1e0"} Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.598910 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.618638 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb/mariadb-client/0.log" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.654916 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.670068 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.698515 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd6bv\" (UniqueName: \"kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv\") pod \"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb\" (UID: \"ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb\") " Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.706139 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv" (OuterVolumeSpecName: "kube-api-access-gd6bv") pod "ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" (UID: "ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb"). InnerVolumeSpecName "kube-api-access-gd6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.800258 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd6bv\" (UniqueName: \"kubernetes.io/projected/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb-kube-api-access-gd6bv\") on node \"crc\" DevicePath \"\"" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.829679 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:35 crc kubenswrapper[5058]: E1014 08:55:35.830285 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" containerName="mariadb-client" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.830304 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" containerName="mariadb-client" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.830478 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" containerName="mariadb-client" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.831078 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:35 crc kubenswrapper[5058]: I1014 08:55:35.852114 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.004822 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qwb\" (UniqueName: \"kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb\") pod \"mariadb-client\" (UID: \"69cefc1a-ae94-4c4d-b21d-e23369850ad4\") " pod="openstack/mariadb-client" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.107275 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59qwb\" (UniqueName: \"kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb\") pod \"mariadb-client\" (UID: \"69cefc1a-ae94-4c4d-b21d-e23369850ad4\") " pod="openstack/mariadb-client" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.127052 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59qwb\" (UniqueName: \"kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb\") pod \"mariadb-client\" (UID: \"69cefc1a-ae94-4c4d-b21d-e23369850ad4\") " pod="openstack/mariadb-client" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.149939 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.247623 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e220718246d849fa2de585fe81c0faa6ff50d168f46b25598f2045f96eba090" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.247721 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.272778 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" podUID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.423522 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:36 crc kubenswrapper[5058]: W1014 08:55:36.432034 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69cefc1a_ae94_4c4d_b21d_e23369850ad4.slice/crio-ac93ca8f147ec1e005bd023489dbe57d63c5b89e47eaa6a710aa3d523f0bd200 WatchSource:0}: Error finding container ac93ca8f147ec1e005bd023489dbe57d63c5b89e47eaa6a710aa3d523f0bd200: Status 404 returned error can't find the container with id ac93ca8f147ec1e005bd023489dbe57d63c5b89e47eaa6a710aa3d523f0bd200 Oct 14 08:55:36 crc kubenswrapper[5058]: I1014 08:55:36.813760 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb" path="/var/lib/kubelet/pods/ea5a0b9c-9aca-44ac-81e5-3c1aed9933eb/volumes" Oct 14 08:55:37 crc kubenswrapper[5058]: I1014 08:55:37.257748 5058 generic.go:334] "Generic (PLEG): container finished" podID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" containerID="5526cad5d1108a258f54189167d4720c30c7714f3a59c9692034ca30d9c137d8" exitCode=0 Oct 14 08:55:37 crc kubenswrapper[5058]: I1014 08:55:37.257842 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"69cefc1a-ae94-4c4d-b21d-e23369850ad4","Type":"ContainerDied","Data":"5526cad5d1108a258f54189167d4720c30c7714f3a59c9692034ca30d9c137d8"} Oct 14 08:55:37 crc kubenswrapper[5058]: I1014 08:55:37.257880 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"69cefc1a-ae94-4c4d-b21d-e23369850ad4","Type":"ContainerStarted","Data":"ac93ca8f147ec1e005bd023489dbe57d63c5b89e47eaa6a710aa3d523f0bd200"} Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.673869 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.695156 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_69cefc1a-ae94-4c4d-b21d-e23369850ad4/mariadb-client/0.log" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.726606 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.739408 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.765242 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59qwb\" (UniqueName: \"kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb\") pod \"69cefc1a-ae94-4c4d-b21d-e23369850ad4\" (UID: \"69cefc1a-ae94-4c4d-b21d-e23369850ad4\") " Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.780966 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb" (OuterVolumeSpecName: "kube-api-access-59qwb") pod "69cefc1a-ae94-4c4d-b21d-e23369850ad4" (UID: "69cefc1a-ae94-4c4d-b21d-e23369850ad4"). 
InnerVolumeSpecName "kube-api-access-59qwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.800260 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" path="/var/lib/kubelet/pods/69cefc1a-ae94-4c4d-b21d-e23369850ad4/volumes" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.867058 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59qwb\" (UniqueName: \"kubernetes.io/projected/69cefc1a-ae94-4c4d-b21d-e23369850ad4-kube-api-access-59qwb\") on node \"crc\" DevicePath \"\"" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.920624 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:38 crc kubenswrapper[5058]: E1014 08:55:38.921016 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" containerName="mariadb-client" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.921032 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" containerName="mariadb-client" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.921189 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="69cefc1a-ae94-4c4d-b21d-e23369850ad4" containerName="mariadb-client" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.921739 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:38 crc kubenswrapper[5058]: I1014 08:55:38.931820 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.070091 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnms\" (UniqueName: \"kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms\") pod \"mariadb-client\" (UID: \"7632db76-c492-48a8-8b02-a9216f82450e\") " pod="openstack/mariadb-client" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.172166 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvnms\" (UniqueName: \"kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms\") pod \"mariadb-client\" (UID: \"7632db76-c492-48a8-8b02-a9216f82450e\") " pod="openstack/mariadb-client" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.204271 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvnms\" (UniqueName: \"kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms\") pod \"mariadb-client\" (UID: \"7632db76-c492-48a8-8b02-a9216f82450e\") " pod="openstack/mariadb-client" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.251502 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.293470 5058 scope.go:117] "RemoveContainer" containerID="5526cad5d1108a258f54189167d4720c30c7714f3a59c9692034ca30d9c137d8" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.293687 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:39 crc kubenswrapper[5058]: I1014 08:55:39.734205 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:39 crc kubenswrapper[5058]: W1014 08:55:39.746756 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7632db76_c492_48a8_8b02_a9216f82450e.slice/crio-c40692bd8a3760a2fdd3a225f8c730e09a17fffb301c05de2906b3c4fefaac16 WatchSource:0}: Error finding container c40692bd8a3760a2fdd3a225f8c730e09a17fffb301c05de2906b3c4fefaac16: Status 404 returned error can't find the container with id c40692bd8a3760a2fdd3a225f8c730e09a17fffb301c05de2906b3c4fefaac16 Oct 14 08:55:40 crc kubenswrapper[5058]: I1014 08:55:40.308532 5058 generic.go:334] "Generic (PLEG): container finished" podID="7632db76-c492-48a8-8b02-a9216f82450e" containerID="aa0391d826a2faa27c84406ffbe849c1a00fea4a6333aa483dc2af8b806759f8" exitCode=0 Oct 14 08:55:40 crc kubenswrapper[5058]: I1014 08:55:40.308668 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7632db76-c492-48a8-8b02-a9216f82450e","Type":"ContainerDied","Data":"aa0391d826a2faa27c84406ffbe849c1a00fea4a6333aa483dc2af8b806759f8"} Oct 14 08:55:40 crc kubenswrapper[5058]: I1014 08:55:40.308865 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7632db76-c492-48a8-8b02-a9216f82450e","Type":"ContainerStarted","Data":"c40692bd8a3760a2fdd3a225f8c730e09a17fffb301c05de2906b3c4fefaac16"} Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.236025 5058 scope.go:117] "RemoveContainer" containerID="fc3be3385e20fd10f7db93a81c7cfd6981c94ac1e472e9fe5e123c77cb59aa3d" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.270061 5058 scope.go:117] "RemoveContainer" containerID="627182e8dd3290dffe4479d2725192a766ba4950578c3060ff7868f526ef61d5" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.324633 5058 scope.go:117] "RemoveContainer" containerID="77a78a32632108da5f42d375cb52a450c2f6e1b4d888e60a71698117d290869b" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.362193 5058 scope.go:117] "RemoveContainer" containerID="11d3c89b934c491e2ad167f10bda427747a8d08096cdee2f1767a2df1831f862" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.391613 5058 scope.go:117] "RemoveContainer" containerID="5e24d176c4d038f05fdd2437e8c8d179745499cdd65d3115520fc2004e37d610" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.425365 5058 scope.go:117] "RemoveContainer" containerID="79988468a053f7615ddbefe7b4fa99c11c776fb27848af1ca8ecb9d0b54b7e90" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.471028 5058 scope.go:117] "RemoveContainer" containerID="d979c4df3e3b4bfd49a5bef0b05d5d5c0daf7bfce2930f8f8e3b1a40e63b89e8" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.497396 5058 scope.go:117] "RemoveContainer" containerID="235d8ff4a37087823faa80ef886ae9406532282fb8465efa6c327c945632ba5a" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.521480 5058 scope.go:117] "RemoveContainer" containerID="5be4b49a7de26b11a62dd57bfb17e7b030709cd494a9bff50d8c76a1dc793235" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.573399 5058 scope.go:117] "RemoveContainer" containerID="6209a2805194e7f00eb5e8bd78e9194796982d393de338bdd2a37787303d15da" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.605139 5058 scope.go:117] "RemoveContainer" 
containerID="e807dfafdf16164ef3d846156e6aff041a40d3f897e29a194fda2b9d5609be48" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.636713 5058 scope.go:117] "RemoveContainer" containerID="6e91de6cfed16e865826e1134abe3c449220735678e0a673b6509fcf57c2fbbb" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.682171 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.710230 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_7632db76-c492-48a8-8b02-a9216f82450e/mariadb-client/0.log" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.741978 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.754246 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.824964 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvnms\" (UniqueName: \"kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms\") pod \"7632db76-c492-48a8-8b02-a9216f82450e\" (UID: \"7632db76-c492-48a8-8b02-a9216f82450e\") " Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.832570 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms" (OuterVolumeSpecName: "kube-api-access-wvnms") pod "7632db76-c492-48a8-8b02-a9216f82450e" (UID: "7632db76-c492-48a8-8b02-a9216f82450e"). InnerVolumeSpecName "kube-api-access-wvnms". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.906324 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:41 crc kubenswrapper[5058]: E1014 08:55:41.906694 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7632db76-c492-48a8-8b02-a9216f82450e" containerName="mariadb-client" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.906713 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7632db76-c492-48a8-8b02-a9216f82450e" containerName="mariadb-client" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.906941 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7632db76-c492-48a8-8b02-a9216f82450e" containerName="mariadb-client" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.907564 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.915744 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:41 crc kubenswrapper[5058]: I1014 08:55:41.939894 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvnms\" (UniqueName: \"kubernetes.io/projected/7632db76-c492-48a8-8b02-a9216f82450e-kube-api-access-wvnms\") on node \"crc\" DevicePath \"\"" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.041467 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttdb\" (UniqueName: \"kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb\") pod \"mariadb-client\" (UID: \"edc138ff-389a-4b9e-8a8b-da0a552a4038\") " pod="openstack/mariadb-client" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.144057 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ttdb\" (UniqueName: \"kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb\") pod \"mariadb-client\" (UID: \"edc138ff-389a-4b9e-8a8b-da0a552a4038\") " pod="openstack/mariadb-client" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.177659 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ttdb\" (UniqueName: \"kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb\") pod \"mariadb-client\" (UID: \"edc138ff-389a-4b9e-8a8b-da0a552a4038\") " pod="openstack/mariadb-client" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.263421 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.346837 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40692bd8a3760a2fdd3a225f8c730e09a17fffb301c05de2906b3c4fefaac16" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.347003 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.386025 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="7632db76-c492-48a8-8b02-a9216f82450e" podUID="edc138ff-389a-4b9e-8a8b-da0a552a4038" Oct 14 08:55:42 crc kubenswrapper[5058]: E1014 08:55:42.590041 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7632db76_c492_48a8_8b02_a9216f82450e.slice\": RecentStats: unable to find data in memory cache]" Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.788834 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:42 crc kubenswrapper[5058]: I1014 08:55:42.801883 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7632db76-c492-48a8-8b02-a9216f82450e" path="/var/lib/kubelet/pods/7632db76-c492-48a8-8b02-a9216f82450e/volumes" Oct 14 08:55:43 crc kubenswrapper[5058]: I1014 08:55:43.364052 5058 generic.go:334] "Generic (PLEG): container finished" podID="edc138ff-389a-4b9e-8a8b-da0a552a4038" containerID="b25a4b0eaddbadefd46098ceb76c34825a703ebec450a39f4a7cbeed766bfdc7" exitCode=0 Oct 14 08:55:43 crc kubenswrapper[5058]: I1014 08:55:43.364125 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"edc138ff-389a-4b9e-8a8b-da0a552a4038","Type":"ContainerDied","Data":"b25a4b0eaddbadefd46098ceb76c34825a703ebec450a39f4a7cbeed766bfdc7"} Oct 14 08:55:43 crc kubenswrapper[5058]: I1014 08:55:43.364199 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"edc138ff-389a-4b9e-8a8b-da0a552a4038","Type":"ContainerStarted","Data":"7d0296ab67e5b1ee131b5149dc1e55590265153f00faca7bf0dace9b735b0047"} Oct 14 08:55:44 crc kubenswrapper[5058]: I1014 08:55:44.822394 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:44 crc kubenswrapper[5058]: I1014 08:55:44.845391 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_edc138ff-389a-4b9e-8a8b-da0a552a4038/mariadb-client/0.log" Oct 14 08:55:44 crc kubenswrapper[5058]: I1014 08:55:44.882698 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:44 crc kubenswrapper[5058]: I1014 08:55:44.894193 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 14 08:55:45 crc kubenswrapper[5058]: I1014 08:55:45.016384 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ttdb\" (UniqueName: \"kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb\") pod \"edc138ff-389a-4b9e-8a8b-da0a552a4038\" (UID: \"edc138ff-389a-4b9e-8a8b-da0a552a4038\") " Oct 14 08:55:45 crc kubenswrapper[5058]: I1014 08:55:45.031180 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb" (OuterVolumeSpecName: "kube-api-access-5ttdb") pod "edc138ff-389a-4b9e-8a8b-da0a552a4038" (UID: "edc138ff-389a-4b9e-8a8b-da0a552a4038"). InnerVolumeSpecName "kube-api-access-5ttdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:55:45 crc kubenswrapper[5058]: I1014 08:55:45.119176 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ttdb\" (UniqueName: \"kubernetes.io/projected/edc138ff-389a-4b9e-8a8b-da0a552a4038-kube-api-access-5ttdb\") on node \"crc\" DevicePath \"\"" Oct 14 08:55:45 crc kubenswrapper[5058]: I1014 08:55:45.397983 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0296ab67e5b1ee131b5149dc1e55590265153f00faca7bf0dace9b735b0047" Oct 14 08:55:45 crc kubenswrapper[5058]: I1014 08:55:45.398039 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 14 08:55:46 crc kubenswrapper[5058]: I1014 08:55:46.807912 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc138ff-389a-4b9e-8a8b-da0a552a4038" path="/var/lib/kubelet/pods/edc138ff-389a-4b9e-8a8b-da0a552a4038/volumes" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.130661 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 08:56:26 crc kubenswrapper[5058]: E1014 08:56:26.131780 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc138ff-389a-4b9e-8a8b-da0a552a4038" containerName="mariadb-client" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.131819 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc138ff-389a-4b9e-8a8b-da0a552a4038" containerName="mariadb-client" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.132021 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc138ff-389a-4b9e-8a8b-da0a552a4038" containerName="mariadb-client" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.132986 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.135963 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.139481 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rxnhj" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.143524 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.172227 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.181856 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.183748 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.191267 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.193147 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.212257 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.219923 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.254929 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fff994c4-4220-4da1-8c5d-dbaf65e2b117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.254974 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/fff994c4-4220-4da1-8c5d-dbaf65e2b117-kube-api-access-z462c\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.255001 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff994c4-4220-4da1-8c5d-dbaf65e2b117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.255044 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74642069-cab8-412d-8faf-120da6f8e4a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74642069-cab8-412d-8faf-120da6f8e4a8\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.255068 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-config\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.255285 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.321475 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.323453 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.325673 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-g5zd6" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.326421 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.327458 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357289 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357449 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf95d4f8-e496-480a-8201-48da4b99973b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357510 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107587cc-f1ce-4990-9d6b-6140322ba805-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357644 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357739 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fff994c4-4220-4da1-8c5d-dbaf65e2b117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357869 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/fff994c4-4220-4da1-8c5d-dbaf65e2b117-kube-api-access-z462c\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.357954 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff994c4-4220-4da1-8c5d-dbaf65e2b117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358020 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf95d4f8-e496-480a-8201-48da4b99973b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358084 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-config\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358163 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358222 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76sp\" (UniqueName: \"kubernetes.io/projected/bf95d4f8-e496-480a-8201-48da4b99973b-kube-api-access-q76sp\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358325 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74642069-cab8-412d-8faf-120da6f8e4a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74642069-cab8-412d-8faf-120da6f8e4a8\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358374 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-config\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.358414 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/107587cc-f1ce-4990-9d6b-6140322ba805-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.359353 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-config\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.359429 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc 
kubenswrapper[5058]: I1014 08:56:26.359485 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xsz\" (UniqueName: \"kubernetes.io/projected/107587cc-f1ce-4990-9d6b-6140322ba805-kube-api-access-k6xsz\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.360112 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.360938 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fff994c4-4220-4da1-8c5d-dbaf65e2b117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.361835 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff994c4-4220-4da1-8c5d-dbaf65e2b117-config\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.362662 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.362688 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74642069-cab8-412d-8faf-120da6f8e4a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74642069-cab8-412d-8faf-120da6f8e4a8\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/736c1c949ba70713e5c69b0827e0b17ccf39a2adca4587abd8d61c0afa099303/globalmount\"" pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.368368 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.370537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff994c4-4220-4da1-8c5d-dbaf65e2b117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.379959 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/fff994c4-4220-4da1-8c5d-dbaf65e2b117-kube-api-access-z462c\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.380033 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.381883 5058 util.go:30] "No sandbox for pod can be found. 
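Note: each MountDevice entry logs a device mount path of the form /var/lib/kubelet/plugins/kubernetes.io/csi/<driver>/<64-hex>/globalmount. The 64-hex segment looks like a digest of the volume's identity; the sketch below assumes SHA-256 of the CSI volume handle, which matches the shape of the logged paths but should be treated as an assumption about this provisioner rather than a verified detail:

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	// Driver and volume handle taken from the MountDevice entry for
	// ovsdbserver-nb-0 above.
	driver := "kubevirt.io.hostpath-provisioner"
	volumeHandle := "pvc-74642069-cab8-412d-8faf-120da6f8e4a8"

	// Assumed: the directory name is sha256(volumeHandle) in hex.
	sum := sha256.Sum256([]byte(volumeHandle))
	fmt.Println(filepath.Join(
		"/var/lib/kubelet/plugins/kubernetes.io/csi",
		driver,
		fmt.Sprintf("%x", sum),
		"globalmount",
	))
}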
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.393885 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.398329 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.402773 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.410074 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.427258 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74642069-cab8-412d-8faf-120da6f8e4a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74642069-cab8-412d-8faf-120da6f8e4a8\") pod \"ovsdbserver-nb-0\" (UID: \"fff994c4-4220-4da1-8c5d-dbaf65e2b117\") " pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-config\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460609 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460639 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897mh\" (UniqueName: \"kubernetes.io/projected/047c2989-508f-4847-afa8-67c09f45bc92-kube-api-access-897mh\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460668 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-config\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460716 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460744 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xsz\" (UniqueName: \"kubernetes.io/projected/107587cc-f1ce-4990-9d6b-6140322ba805-kube-api-access-k6xsz\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460778 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-33591464-59e6-445c-b699-22b82a1b8e54\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33591464-59e6-445c-b699-22b82a1b8e54\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460817 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047c2989-508f-4847-afa8-67c09f45bc92-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460855 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c2989-508f-4847-afa8-67c09f45bc92-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460877 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460908 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-917738dd-0329-464c-8352-bfd88507b045\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917738dd-0329-464c-8352-bfd88507b045\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460939 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf95d4f8-e496-480a-8201-48da4b99973b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460959 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c95e379-00e4-475f-9bcb-841be048de5a-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.460982 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.461005 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107587cc-f1ce-4990-9d6b-6140322ba805-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.461043 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-config\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.461071 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1fe1996b-584e-482e-abd2-190dcbf18649\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fe1996b-584e-482e-abd2-190dcbf18649\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.461098 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.463385 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.463658 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.463273 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c95e379-00e4-475f-9bcb-841be048de5a-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.463736 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv78c\" (UniqueName: \"kubernetes.io/projected/2a475911-a9eb-4305-9abc-6f5cedb592b6-kube-api-access-zv78c\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-config\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465199 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a475911-a9eb-4305-9abc-6f5cedb592b6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465238 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf95d4f8-e496-480a-8201-48da4b99973b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 
08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465270 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-config\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465302 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a475911-a9eb-4305-9abc-6f5cedb592b6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465320 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95d4f8-e496-480a-8201-48da4b99973b-config\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465327 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465432 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q76sp\" (UniqueName: \"kubernetes.io/projected/bf95d4f8-e496-480a-8201-48da4b99973b-kube-api-access-q76sp\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465568 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465628 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/107587cc-f1ce-4990-9d6b-6140322ba805-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.465687 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkg7\" (UniqueName: \"kubernetes.io/projected/1c95e379-00e4-475f-9bcb-841be048de5a-kube-api-access-rtkg7\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.467417 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107587cc-f1ce-4990-9d6b-6140322ba805-config\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.467711 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf95d4f8-e496-480a-8201-48da4b99973b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.467752 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.467791 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e948f6b981237ca0e6e86e8dccc4ffbac9b7f842ed9fe49c36a28e71dd2e8648/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.467917 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf95d4f8-e496-480a-8201-48da4b99973b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.469991 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/107587cc-f1ce-4990-9d6b-6140322ba805-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.470230 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.470270 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c785ad23a4e95a1935a33967408c2940e9f302422eb1310225220c5e6bd3be91/globalmount\"" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.476539 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.476635 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107587cc-f1ce-4990-9d6b-6140322ba805-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.486961 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q76sp\" (UniqueName: \"kubernetes.io/projected/bf95d4f8-e496-480a-8201-48da4b99973b-kube-api-access-q76sp\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.488025 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xsz\" (UniqueName: \"kubernetes.io/projected/107587cc-f1ce-4990-9d6b-6140322ba805-kube-api-access-k6xsz\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.509065 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cad86fa8-af67-447f-8b0a-36dad0e995fd\") pod \"ovsdbserver-nb-2\" (UID: \"107587cc-f1ce-4990-9d6b-6140322ba805\") " pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.521716 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.526304 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bf4600a2-5d1e-416b-8f1d-a9d2ebb13129\") pod \"ovsdbserver-nb-1\" (UID: \"bf95d4f8-e496-480a-8201-48da4b99973b\") " pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.567876 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a475911-a9eb-4305-9abc-6f5cedb592b6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568168 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568304 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkg7\" (UniqueName: \"kubernetes.io/projected/1c95e379-00e4-475f-9bcb-841be048de5a-kube-api-access-rtkg7\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-config\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " 
pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568507 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568615 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-897mh\" (UniqueName: \"kubernetes.io/projected/047c2989-508f-4847-afa8-67c09f45bc92-kube-api-access-897mh\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568740 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-33591464-59e6-445c-b699-22b82a1b8e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33591464-59e6-445c-b699-22b82a1b8e54\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568875 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047c2989-508f-4847-afa8-67c09f45bc92-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.568980 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c2989-508f-4847-afa8-67c09f45bc92-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569074 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569171 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-917738dd-0329-464c-8352-bfd88507b045\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917738dd-0329-464c-8352-bfd88507b045\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569278 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c95e379-00e4-475f-9bcb-841be048de5a-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569391 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-config\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-1fe1996b-584e-482e-abd2-190dcbf18649\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fe1996b-584e-482e-abd2-190dcbf18649\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569659 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c95e379-00e4-475f-9bcb-841be048de5a-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569761 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv78c\" (UniqueName: \"kubernetes.io/projected/2a475911-a9eb-4305-9abc-6f5cedb592b6-kube-api-access-zv78c\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-config\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569960 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.570066 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a475911-a9eb-4305-9abc-6f5cedb592b6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.570339 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c95e379-00e4-475f-9bcb-841be048de5a-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.570624 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.569446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/047c2989-508f-4847-afa8-67c09f45bc92-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.571446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a475911-a9eb-4305-9abc-6f5cedb592b6-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.571691 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047c2989-508f-4847-afa8-67c09f45bc92-config\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.572241 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.572279 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1fe1996b-584e-482e-abd2-190dcbf18649\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fe1996b-584e-482e-abd2-190dcbf18649\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c4a7986982645c742ddc57a4178a3c61caac305198ab38b256ec61f72f43fcb3/globalmount\"" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.573174 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.573218 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-917738dd-0329-464c-8352-bfd88507b045\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917738dd-0329-464c-8352-bfd88507b045\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aaec41bd9637fd970e9a3cda5e3e505f33d4887121df324a4b249775233b6aa3/globalmount\"" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.574401 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c95e379-00e4-475f-9bcb-841be048de5a-config\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.583011 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047c2989-508f-4847-afa8-67c09f45bc92-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.583360 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c95e379-00e4-475f-9bcb-841be048de5a-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.583537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-config\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.584028 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a475911-a9eb-4305-9abc-6f5cedb592b6-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " 
pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.584063 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a475911-a9eb-4305-9abc-6f5cedb592b6-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.585703 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv78c\" (UniqueName: \"kubernetes.io/projected/2a475911-a9eb-4305-9abc-6f5cedb592b6-kube-api-access-zv78c\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.589959 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.590002 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-33591464-59e6-445c-b699-22b82a1b8e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33591464-59e6-445c-b699-22b82a1b8e54\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ddf97cc0bfb9e9d7859be8ffea42a391bf69b43cb876284a44d9b739b30ad28c/globalmount\"" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.591702 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-897mh\" (UniqueName: \"kubernetes.io/projected/047c2989-508f-4847-afa8-67c09f45bc92-kube-api-access-897mh\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.612021 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkg7\" (UniqueName: \"kubernetes.io/projected/1c95e379-00e4-475f-9bcb-841be048de5a-kube-api-access-rtkg7\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.651507 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-917738dd-0329-464c-8352-bfd88507b045\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917738dd-0329-464c-8352-bfd88507b045\") pod \"ovsdbserver-sb-0\" (UID: \"047c2989-508f-4847-afa8-67c09f45bc92\") " pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.663633 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1fe1996b-584e-482e-abd2-190dcbf18649\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fe1996b-584e-482e-abd2-190dcbf18649\") pod \"ovsdbserver-sb-1\" (UID: \"1c95e379-00e4-475f-9bcb-841be048de5a\") " pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.672501 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-33591464-59e6-445c-b699-22b82a1b8e54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33591464-59e6-445c-b699-22b82a1b8e54\") pod \"ovsdbserver-sb-2\" (UID: \"2a475911-a9eb-4305-9abc-6f5cedb592b6\") " pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.808515 5058 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.883313 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.891732 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:26 crc kubenswrapper[5058]: I1014 08:56:26.944771 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.092655 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.159940 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 08:56:27 crc kubenswrapper[5058]: W1014 08:56:27.165208 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod107587cc_f1ce_4990_9d6b_6140322ba805.slice/crio-84a6852abc46b79e5ce7ce7f027652445f04fc0544664f76506262d7c6abb640 WatchSource:0}: Error finding container 84a6852abc46b79e5ce7ce7f027652445f04fc0544664f76506262d7c6abb640: Status 404 returned error can't find the container with id 84a6852abc46b79e5ce7ce7f027652445f04fc0544664f76506262d7c6abb640 Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.242470 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.435003 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.533310 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 14 08:56:27 crc kubenswrapper[5058]: W1014 08:56:27.537636 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c95e379_00e4_475f_9bcb_841be048de5a.slice/crio-dd7e004ff5a4b73b04a80a7ca48457bbb4ff10ed7bad9a03fd2ce6c7a31d075d WatchSource:0}: Error finding container dd7e004ff5a4b73b04a80a7ca48457bbb4ff10ed7bad9a03fd2ce6c7a31d075d: Status 404 returned error can't find the container with id dd7e004ff5a4b73b04a80a7ca48457bbb4ff10ed7bad9a03fd2ce6c7a31d075d Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.827198 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047c2989-508f-4847-afa8-67c09f45bc92","Type":"ContainerStarted","Data":"7ee56a16eb70e5257f90113356b3f0a5d925d09738ac8eccbd0f51478e18ffd7"} Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.831494 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"1c95e379-00e4-475f-9bcb-841be048de5a","Type":"ContainerStarted","Data":"dd7e004ff5a4b73b04a80a7ca48457bbb4ff10ed7bad9a03fd2ce6c7a31d075d"} Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.833250 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bf95d4f8-e496-480a-8201-48da4b99973b","Type":"ContainerStarted","Data":"0b649211efd8d6307c073342aa660cd95f0494ab3ff7f5d2523125827e521e9d"} Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.835277 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"107587cc-f1ce-4990-9d6b-6140322ba805","Type":"ContainerStarted","Data":"84a6852abc46b79e5ce7ce7f027652445f04fc0544664f76506262d7c6abb640"} Oct 14 08:56:27 crc kubenswrapper[5058]: I1014 08:56:27.836687 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fff994c4-4220-4da1-8c5d-dbaf65e2b117","Type":"ContainerStarted","Data":"8968611e502ebd4602c0cf3eadb310b5b452f3edbb2c8196e0eb46236778e28b"} Oct 14 08:56:28 crc kubenswrapper[5058]: I1014 08:56:28.205874 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Oct 14 08:56:28 crc kubenswrapper[5058]: W1014 08:56:28.207882 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a475911_a9eb_4305_9abc_6f5cedb592b6.slice/crio-4da3f4ea926f52399bb3572477bc200c8b62fd4553969d4b83de098e77ee46ab WatchSource:0}: Error finding container 4da3f4ea926f52399bb3572477bc200c8b62fd4553969d4b83de098e77ee46ab: Status 404 returned error can't find the container with id 4da3f4ea926f52399bb3572477bc200c8b62fd4553969d4b83de098e77ee46ab Oct 14 08:56:28 crc kubenswrapper[5058]: I1014 08:56:28.846117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2a475911-a9eb-4305-9abc-6f5cedb592b6","Type":"ContainerStarted","Data":"4da3f4ea926f52399bb3572477bc200c8b62fd4553969d4b83de098e77ee46ab"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.870545 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bf95d4f8-e496-480a-8201-48da4b99973b","Type":"ContainerStarted","Data":"2058d13f8cea7e86649a252b4f527153807af50c28937da270e6b7c09af87625"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.871299 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bf95d4f8-e496-480a-8201-48da4b99973b","Type":"ContainerStarted","Data":"695849f43b8da734f370cdb873dcf28e7cb986009c1fef8693d2f9c5f509e2f1"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.874657 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"107587cc-f1ce-4990-9d6b-6140322ba805","Type":"ContainerStarted","Data":"c38f19aceade7d4574cbc53e135717b0c02cee5e0157e00943db313ff4e7f301"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.874682 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"107587cc-f1ce-4990-9d6b-6140322ba805","Type":"ContainerStarted","Data":"618e79c09a505d5cb64cd027e608604c537f2c8f8446e2d9a530497000138bc8"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.877040 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047c2989-508f-4847-afa8-67c09f45bc92","Type":"ContainerStarted","Data":"bbce72aa7912d7ff73fa3f29a7596334eca2e8043009597d01bf225073230ff4"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.877068 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"047c2989-508f-4847-afa8-67c09f45bc92","Type":"ContainerStarted","Data":"577cfe45c52bbf29beb8ef800f56f5173249d142cd18527ce05e9fc2eac33a17"} Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.919150 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.098450109 podStartE2EDuration="6.919132771s" podCreationTimestamp="2025-10-14 08:56:25 +0000 
UTC" firstStartedPulling="2025-10-14 08:56:27.454622911 +0000 UTC m=+7735.365706717" lastFinishedPulling="2025-10-14 08:56:31.275305553 +0000 UTC m=+7739.186389379" observedRunningTime="2025-10-14 08:56:31.913244879 +0000 UTC m=+7739.824328685" watchObservedRunningTime="2025-10-14 08:56:31.919132771 +0000 UTC m=+7739.830216577" Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.919831 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=2.900049029 podStartE2EDuration="6.919824932s" podCreationTimestamp="2025-10-14 08:56:25 +0000 UTC" firstStartedPulling="2025-10-14 08:56:27.250319817 +0000 UTC m=+7735.161403623" lastFinishedPulling="2025-10-14 08:56:31.27009572 +0000 UTC m=+7739.181179526" observedRunningTime="2025-10-14 08:56:31.89552733 +0000 UTC m=+7739.806611136" watchObservedRunningTime="2025-10-14 08:56:31.919824932 +0000 UTC m=+7739.830908728" Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.934826 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=2.823016402 podStartE2EDuration="6.9347873s" podCreationTimestamp="2025-10-14 08:56:25 +0000 UTC" firstStartedPulling="2025-10-14 08:56:27.175827805 +0000 UTC m=+7735.086911621" lastFinishedPulling="2025-10-14 08:56:31.287598713 +0000 UTC m=+7739.198682519" observedRunningTime="2025-10-14 08:56:31.933868123 +0000 UTC m=+7739.844951949" watchObservedRunningTime="2025-10-14 08:56:31.9347873 +0000 UTC m=+7739.845871106" Oct 14 08:56:31 crc kubenswrapper[5058]: I1014 08:56:31.945751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.522836 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.809477 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.889346 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fff994c4-4220-4da1-8c5d-dbaf65e2b117","Type":"ContainerStarted","Data":"5cf39c1d5928edeca9f713b7f07b4cac3d7d22dc6f99904669fd6876db3c2de8"} Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.889720 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fff994c4-4220-4da1-8c5d-dbaf65e2b117","Type":"ContainerStarted","Data":"ebee21447fd10a3d1818ecc917605e2f7ee1b0df254388591fa331f450575d57"} Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.913216 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.898672538 podStartE2EDuration="7.913197529s" podCreationTimestamp="2025-10-14 08:56:25 +0000 UTC" firstStartedPulling="2025-10-14 08:56:27.113710955 +0000 UTC m=+7735.024794761" lastFinishedPulling="2025-10-14 08:56:32.128235946 +0000 UTC m=+7740.039319752" observedRunningTime="2025-10-14 08:56:32.909902342 +0000 UTC m=+7740.820986158" watchObservedRunningTime="2025-10-14 08:56:32.913197529 +0000 UTC m=+7740.824281335" Oct 14 08:56:32 crc kubenswrapper[5058]: I1014 08:56:32.945836 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.655527 5058 patch_prober.go:28] interesting 
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.655527 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.655883 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.903236 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2a475911-a9eb-4305-9abc-6f5cedb592b6","Type":"ContainerStarted","Data":"075457e66044139db74436a914e1cc8178c30d3a601e4e8b7c4bd1989c9384a9"}
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.903281 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2a475911-a9eb-4305-9abc-6f5cedb592b6","Type":"ContainerStarted","Data":"b869790ac3993d5c95296adb08a7f90c1b3fabda2ec42e0eccc143fca7c43e78"}
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.908370 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"1c95e379-00e4-475f-9bcb-841be048de5a","Type":"ContainerStarted","Data":"5b945b7659a078f14e40f5cd45f3d9a4b41699dd38a6b1f69f804453d29f8218"}
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.908586 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"1c95e379-00e4-475f-9bcb-841be048de5a","Type":"ContainerStarted","Data":"5a3c15060983df5d42bf8eb0129305d178d18523526cd1e0d5875cd14f6b5754"}
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.935408 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.918787108 podStartE2EDuration="8.935377919s" podCreationTimestamp="2025-10-14 08:56:25 +0000 UTC" firstStartedPulling="2025-10-14 08:56:28.20975046 +0000 UTC m=+7736.120834266" lastFinishedPulling="2025-10-14 08:56:33.226341271 +0000 UTC m=+7741.137425077" observedRunningTime="2025-10-14 08:56:33.932029061 +0000 UTC m=+7741.843112957" watchObservedRunningTime="2025-10-14 08:56:33.935377919 +0000 UTC m=+7741.846461805"
Oct 14 08:56:33 crc kubenswrapper[5058]: I1014 08:56:33.956483 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.673239976 podStartE2EDuration="8.956466027s" podCreationTimestamp="2025-10-14 08:56:25 +0000 UTC" firstStartedPulling="2025-10-14 08:56:27.539479387 +0000 UTC m=+7735.450563193" lastFinishedPulling="2025-10-14 08:56:32.822705428 +0000 UTC m=+7740.733789244" observedRunningTime="2025-10-14 08:56:33.951378208 +0000 UTC m=+7741.862462084" watchObservedRunningTime="2025-10-14 08:56:33.956466027 +0000 UTC m=+7741.867549833"
Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.478017 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.562194 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
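Note: the patch_prober/prober entries are an HTTP liveness probe failing at the transport level: nothing is listening on 127.0.0.1:8798 while the machine-config-daemon container restarts, so the GET fails with "connection refused" and the probe is recorded as a failure. An HTTP probe amounts to a GET where any transport error or non-2xx/3xx status fails the check; a minimal sketch, not the kubelet prober's actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a single GET and classifies the result the way a liveness
// check would.
func probe(url string) string {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. dial tcp 127.0.0.1:8798: connect: connection refused
		return fmt.Sprintf("failure: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() { fmt.Println(probe("http://127.0.0.1:8798/health")) }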
status="started" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.602195 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.884249 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.886209 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.887366 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.892046 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:35 crc kubenswrapper[5058]: I1014 08:56:35.928924 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.019852 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.089504 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.385842 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.387932 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.392299 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.396825 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.494847 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.494947 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4r8z\" (UniqueName: \"kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.495038 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.495189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc\") 
pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.556511 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.596340 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4r8z\" (UniqueName: \"kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.596415 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.596475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.596519 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.597434 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.597440 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.598295 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.623773 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4r8z\" (UniqueName: \"kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z\") pod \"dnsmasq-dns-659bd6f88c-gpcv2\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.709569 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.868554 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.875878 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.879166 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"] Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.880460 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.892714 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.893366 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.893401 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 14 08:56:36 crc kubenswrapper[5058]: I1014 08:56:36.903057 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"] Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.002916 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.003116 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.003193 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.003257 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7n4\" (UniqueName: \"kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.003283 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.107821 5058 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nf7n4\" (UniqueName: \"kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.107870 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.107910 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.107991 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.108022 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.109028 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.109087 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.109028 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.109258 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.125987 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7n4\" (UniqueName: 
\"kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4\") pod \"dnsmasq-dns-6dcffdbb49-8m8gf\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.233016 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.242939 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:37 crc kubenswrapper[5058]: W1014 08:56:37.494167 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cd1d393_3acf_4fca_9337_467f82477386.slice/crio-808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541 WatchSource:0}: Error finding container 808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541: Status 404 returned error can't find the container with id 808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541 Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.495971 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"] Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.948698 5058 generic.go:334] "Generic (PLEG): container finished" podID="aad546a3-5c92-481c-87f0-d37de431989a" containerID="fa01623be31f30e5d7a34dedfee64b4d1503904b8e0b392f71f30416f72f911c" exitCode=0 Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.948767 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" event={"ID":"aad546a3-5c92-481c-87f0-d37de431989a","Type":"ContainerDied","Data":"fa01623be31f30e5d7a34dedfee64b4d1503904b8e0b392f71f30416f72f911c"} Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.949178 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" event={"ID":"aad546a3-5c92-481c-87f0-d37de431989a","Type":"ContainerStarted","Data":"994eb04cc5e75a674bedd54180c093c3c0d20d529dbc9cf342110dece9b92e45"} Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.959131 5058 generic.go:334] "Generic (PLEG): container finished" podID="3cd1d393-3acf-4fca-9337-467f82477386" containerID="77c5bd35027c641ba133d4895b893d8b41462efdc8b9bf9ae2595815f0d365a0" exitCode=0 Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.959185 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" event={"ID":"3cd1d393-3acf-4fca-9337-467f82477386","Type":"ContainerDied","Data":"77c5bd35027c641ba133d4895b893d8b41462efdc8b9bf9ae2595815f0d365a0"} Oct 14 08:56:37 crc kubenswrapper[5058]: I1014 08:56:37.959232 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" event={"ID":"3cd1d393-3acf-4fca-9337-467f82477386","Type":"ContainerStarted","Data":"808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541"} Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.311290 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.432666 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4r8z\" (UniqueName: \"kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z\") pod \"aad546a3-5c92-481c-87f0-d37de431989a\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.432810 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb\") pod \"aad546a3-5c92-481c-87f0-d37de431989a\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.432860 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config\") pod \"aad546a3-5c92-481c-87f0-d37de431989a\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.432890 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc\") pod \"aad546a3-5c92-481c-87f0-d37de431989a\" (UID: \"aad546a3-5c92-481c-87f0-d37de431989a\") " Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.438945 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z" (OuterVolumeSpecName: "kube-api-access-p4r8z") pod "aad546a3-5c92-481c-87f0-d37de431989a" (UID: "aad546a3-5c92-481c-87f0-d37de431989a"). InnerVolumeSpecName "kube-api-access-p4r8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.457109 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aad546a3-5c92-481c-87f0-d37de431989a" (UID: "aad546a3-5c92-481c-87f0-d37de431989a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.468296 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aad546a3-5c92-481c-87f0-d37de431989a" (UID: "aad546a3-5c92-481c-87f0-d37de431989a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.469223 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config" (OuterVolumeSpecName: "config") pod "aad546a3-5c92-481c-87f0-d37de431989a" (UID: "aad546a3-5c92-481c-87f0-d37de431989a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.535204 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4r8z\" (UniqueName: \"kubernetes.io/projected/aad546a3-5c92-481c-87f0-d37de431989a-kube-api-access-p4r8z\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.535237 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.535249 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-config\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.535259 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aad546a3-5c92-481c-87f0-d37de431989a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.945840 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.951533 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.973450 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" event={"ID":"aad546a3-5c92-481c-87f0-d37de431989a","Type":"ContainerDied","Data":"994eb04cc5e75a674bedd54180c093c3c0d20d529dbc9cf342110dece9b92e45"} Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.973526 5058 scope.go:117] "RemoveContainer" containerID="fa01623be31f30e5d7a34dedfee64b4d1503904b8e0b392f71f30416f72f911c" Oct 14 08:56:38 crc kubenswrapper[5058]: I1014 08:56:38.973681 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-659bd6f88c-gpcv2" Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.015340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" event={"ID":"3cd1d393-3acf-4fca-9337-467f82477386","Type":"ContainerStarted","Data":"6aeba1b43e5a5d2e7d8b0384a4d51ffda4f906c29251e649976518c999e290eb"} Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.015530 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.037419 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.039558 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.068830 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.076752 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-659bd6f88c-gpcv2"] Oct 14 08:56:39 crc kubenswrapper[5058]: I1014 08:56:39.085199 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" podStartSLOduration=3.085174682 podStartE2EDuration="3.085174682s" podCreationTimestamp="2025-10-14 08:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:56:39.060352865 +0000 UTC m=+7746.971436671" watchObservedRunningTime="2025-10-14 08:56:39.085174682 +0000 UTC m=+7746.996258488" Oct 14 08:56:40 crc kubenswrapper[5058]: I1014 08:56:40.800819 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad546a3-5c92-481c-87f0-d37de431989a" path="/var/lib/kubelet/pods/aad546a3-5c92-481c-87f0-d37de431989a/volumes" Oct 14 08:56:41 crc kubenswrapper[5058]: I1014 08:56:41.557882 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 14 08:56:41 crc kubenswrapper[5058]: I1014 08:56:41.953477 5058 scope.go:117] "RemoveContainer" containerID="54f59e1c3e40a3d5721244663ed9af819c00d73bd318b0ce1d45a74ef2dd29bf" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.817039 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Oct 14 08:56:44 crc kubenswrapper[5058]: E1014 08:56:44.819263 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad546a3-5c92-481c-87f0-d37de431989a" containerName="init" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.819422 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad546a3-5c92-481c-87f0-d37de431989a" containerName="init" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.822406 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad546a3-5c92-481c-87f0-d37de431989a" containerName="init" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.823659 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.826419 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.828671 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.883393 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9r5\" (UniqueName: \"kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.884561 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.884604 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.986493 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.986888 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:44 crc kubenswrapper[5058]: I1014 08:56:44.987075 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9r5\" (UniqueName: \"kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.002417 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.002459 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6cbcfcc429fa3eab1a9402c84b136580cc76fb74c8e2e2018a6205ecf4ea37f7/globalmount\"" pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.002721 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.007537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9r5\" (UniqueName: \"kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.036839 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") pod \"ovn-copy-data\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.159089 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.742472 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 14 08:56:45 crc kubenswrapper[5058]: W1014 08:56:45.745788 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c026dd0_fcb2_4488_bb1a_39b3b72690c8.slice/crio-3ba2a2fca25948da7ccfa93d2af8150a1ddae12bb2bac695216516eeb230d1ca WatchSource:0}: Error finding container 3ba2a2fca25948da7ccfa93d2af8150a1ddae12bb2bac695216516eeb230d1ca: Status 404 returned error can't find the container with id 3ba2a2fca25948da7ccfa93d2af8150a1ddae12bb2bac695216516eeb230d1ca Oct 14 08:56:45 crc kubenswrapper[5058]: I1014 08:56:45.749867 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 08:56:46 crc kubenswrapper[5058]: I1014 08:56:46.086608 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"1c026dd0-fcb2-4488-bb1a-39b3b72690c8","Type":"ContainerStarted","Data":"3ba2a2fca25948da7ccfa93d2af8150a1ddae12bb2bac695216516eeb230d1ca"} Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.101001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"1c026dd0-fcb2-4488-bb1a-39b3b72690c8","Type":"ContainerStarted","Data":"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f"} Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.124089 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.902821709 podStartE2EDuration="4.1240644s" podCreationTimestamp="2025-10-14 08:56:43 +0000 UTC" firstStartedPulling="2025-10-14 08:56:45.749487586 +0000 UTC m=+7753.660571412" lastFinishedPulling="2025-10-14 08:56:45.970730257 +0000 UTC m=+7753.881814103" observedRunningTime="2025-10-14 08:56:47.119256989 +0000 UTC m=+7755.030340835" watchObservedRunningTime="2025-10-14 08:56:47.1240644 +0000 UTC m=+7755.035148226" Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.235160 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.336010 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"] Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.336303 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="dnsmasq-dns" containerID="cri-o://547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe" gracePeriod=10 Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.829811 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.942694 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gh7r\" (UniqueName: \"kubernetes.io/projected/c4bf9877-0fae-4b0c-b651-8570384d186c-kube-api-access-2gh7r\") pod \"c4bf9877-0fae-4b0c-b651-8570384d186c\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.942920 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc\") pod \"c4bf9877-0fae-4b0c-b651-8570384d186c\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.942958 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config\") pod \"c4bf9877-0fae-4b0c-b651-8570384d186c\" (UID: \"c4bf9877-0fae-4b0c-b651-8570384d186c\") " Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.949360 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bf9877-0fae-4b0c-b651-8570384d186c-kube-api-access-2gh7r" (OuterVolumeSpecName: "kube-api-access-2gh7r") pod "c4bf9877-0fae-4b0c-b651-8570384d186c" (UID: "c4bf9877-0fae-4b0c-b651-8570384d186c"). InnerVolumeSpecName "kube-api-access-2gh7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:56:47 crc kubenswrapper[5058]: I1014 08:56:47.999676 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4bf9877-0fae-4b0c-b651-8570384d186c" (UID: "c4bf9877-0fae-4b0c-b651-8570384d186c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.002959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config" (OuterVolumeSpecName: "config") pod "c4bf9877-0fae-4b0c-b651-8570384d186c" (UID: "c4bf9877-0fae-4b0c-b651-8570384d186c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.045301 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gh7r\" (UniqueName: \"kubernetes.io/projected/c4bf9877-0fae-4b0c-b651-8570384d186c-kube-api-access-2gh7r\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.045334 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.045343 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4bf9877-0fae-4b0c-b651-8570384d186c-config\") on node \"crc\" DevicePath \"\"" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.112179 5058 generic.go:334] "Generic (PLEG): container finished" podID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerID="547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe" exitCode=0 Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.112231 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" event={"ID":"c4bf9877-0fae-4b0c-b651-8570384d186c","Type":"ContainerDied","Data":"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe"} Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.112277 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" event={"ID":"c4bf9877-0fae-4b0c-b651-8570384d186c","Type":"ContainerDied","Data":"e4b08f809a9a13f2ab5a5d4a7d030ed29c4b7d37ba3b66f58bafda5114abc35e"} Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.112278 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6867678c-cgbxh" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.112297 5058 scope.go:117] "RemoveContainer" containerID="547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.148015 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"] Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.153094 5058 scope.go:117] "RemoveContainer" containerID="03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.156354 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6867678c-cgbxh"] Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.179391 5058 scope.go:117] "RemoveContainer" containerID="547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe" Oct 14 08:56:48 crc kubenswrapper[5058]: E1014 08:56:48.179863 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe\": container with ID starting with 547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe not found: ID does not exist" containerID="547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.179892 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe"} err="failed to get container status \"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe\": rpc error: code = NotFound desc = could not find container \"547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe\": container with ID starting with 547b1d9f501d20cbb04b7115c6e9323c28f7a04e056cc981b90e3fa47cfaf2fe not found: ID does not exist" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.179911 5058 scope.go:117] "RemoveContainer" containerID="03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce" Oct 14 08:56:48 crc kubenswrapper[5058]: E1014 08:56:48.180328 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce\": container with ID starting with 03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce not found: ID does not exist" containerID="03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.180381 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce"} err="failed to get container status \"03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce\": rpc error: code = NotFound desc = could not find container \"03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce\": container with ID starting with 03695984ccbb8463fe46050d39a0f73f686400d245172e262fe37d49ea24dbce not found: ID does not exist" Oct 14 08:56:48 crc kubenswrapper[5058]: I1014 08:56:48.807322 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" path="/var/lib/kubelet/pods/c4bf9877-0fae-4b0c-b651-8570384d186c/volumes" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.281290 5058 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 14 08:56:55 crc kubenswrapper[5058]: E1014 08:56:55.282386 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="init" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.282408 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="init" Oct 14 08:56:55 crc kubenswrapper[5058]: E1014 08:56:55.282429 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="dnsmasq-dns" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.282442 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="dnsmasq-dns" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.282767 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bf9877-0fae-4b0c-b651-8570384d186c" containerName="dnsmasq-dns" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.289653 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.292999 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ng7w6" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.294229 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.296008 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.315899 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.404639 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1b5b14-b87d-4e85-a84c-0e632336be97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.404714 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-scripts\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.404744 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-config\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.404784 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d1b5b14-b87d-4e85-a84c-0e632336be97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.404872 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7rg\" (UniqueName: 
\"kubernetes.io/projected/1d1b5b14-b87d-4e85-a84c-0e632336be97-kube-api-access-wq7rg\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.506883 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1b5b14-b87d-4e85-a84c-0e632336be97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.506978 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-scripts\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.507046 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-config\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.507139 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d1b5b14-b87d-4e85-a84c-0e632336be97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.507239 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7rg\" (UniqueName: \"kubernetes.io/projected/1d1b5b14-b87d-4e85-a84c-0e632336be97-kube-api-access-wq7rg\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.507923 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d1b5b14-b87d-4e85-a84c-0e632336be97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.508175 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-config\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.509453 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1b5b14-b87d-4e85-a84c-0e632336be97-scripts\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.518439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d1b5b14-b87d-4e85-a84c-0e632336be97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.536289 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7rg\" (UniqueName: 
\"kubernetes.io/projected/1d1b5b14-b87d-4e85-a84c-0e632336be97-kube-api-access-wq7rg\") pod \"ovn-northd-0\" (UID: \"1d1b5b14-b87d-4e85-a84c-0e632336be97\") " pod="openstack/ovn-northd-0" Oct 14 08:56:55 crc kubenswrapper[5058]: I1014 08:56:55.624736 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 14 08:56:56 crc kubenswrapper[5058]: I1014 08:56:56.078539 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 14 08:56:56 crc kubenswrapper[5058]: I1014 08:56:56.195476 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d1b5b14-b87d-4e85-a84c-0e632336be97","Type":"ContainerStarted","Data":"e455752e805a5e8a9365be3498c9c082de4467fb23247b528116bbc5c8ae190c"} Oct 14 08:56:57 crc kubenswrapper[5058]: I1014 08:56:57.207169 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d1b5b14-b87d-4e85-a84c-0e632336be97","Type":"ContainerStarted","Data":"de3de55fb0efcb9dc26e24eaa81d7e59c2cd59028a895833486c306f6a2b15ec"} Oct 14 08:56:58 crc kubenswrapper[5058]: I1014 08:56:58.218357 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d1b5b14-b87d-4e85-a84c-0e632336be97","Type":"ContainerStarted","Data":"dea5695c1a8064b5b3bc9b6b4aa6065f311a6eb7fad28f60ab3b544869cbc86a"} Oct 14 08:56:58 crc kubenswrapper[5058]: I1014 08:56:58.218557 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 14 08:56:58 crc kubenswrapper[5058]: I1014 08:56:58.247460 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.520963355 podStartE2EDuration="3.247433224s" podCreationTimestamp="2025-10-14 08:56:55 +0000 UTC" firstStartedPulling="2025-10-14 08:56:56.079940206 +0000 UTC m=+7763.991024052" lastFinishedPulling="2025-10-14 08:56:56.806410115 +0000 UTC m=+7764.717493921" observedRunningTime="2025-10-14 08:56:58.233653921 +0000 UTC m=+7766.144737767" watchObservedRunningTime="2025-10-14 08:56:58.247433224 +0000 UTC m=+7766.158517070" Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.767939 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fbsbw"] Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.769966 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.778080 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fbsbw"] Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.830441 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz7h\" (UniqueName: \"kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h\") pod \"keystone-db-create-fbsbw\" (UID: \"b791336a-d0a2-4a32-823d-6eb0dbc56438\") " pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.933028 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz7h\" (UniqueName: \"kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h\") pod \"keystone-db-create-fbsbw\" (UID: \"b791336a-d0a2-4a32-823d-6eb0dbc56438\") " pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:01 crc kubenswrapper[5058]: I1014 08:57:01.955778 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz7h\" (UniqueName: \"kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h\") pod \"keystone-db-create-fbsbw\" (UID: \"b791336a-d0a2-4a32-823d-6eb0dbc56438\") " pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:02 crc kubenswrapper[5058]: I1014 08:57:02.102084 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:02 crc kubenswrapper[5058]: I1014 08:57:02.562282 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fbsbw"] Oct 14 08:57:03 crc kubenswrapper[5058]: I1014 08:57:03.269574 5058 generic.go:334] "Generic (PLEG): container finished" podID="b791336a-d0a2-4a32-823d-6eb0dbc56438" containerID="f523f938c96cf5d2dd5233b357f3adf8b207dec646add18faffbaaddf523a4fa" exitCode=0 Oct 14 08:57:03 crc kubenswrapper[5058]: I1014 08:57:03.269632 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbsbw" event={"ID":"b791336a-d0a2-4a32-823d-6eb0dbc56438","Type":"ContainerDied","Data":"f523f938c96cf5d2dd5233b357f3adf8b207dec646add18faffbaaddf523a4fa"} Oct 14 08:57:03 crc kubenswrapper[5058]: I1014 08:57:03.269663 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbsbw" event={"ID":"b791336a-d0a2-4a32-823d-6eb0dbc56438","Type":"ContainerStarted","Data":"f6ad3fe8e1472c640b25c8814842bfb803906bcfef916081ab0206268661d0e6"} Oct 14 08:57:03 crc kubenswrapper[5058]: I1014 08:57:03.655985 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 08:57:03 crc kubenswrapper[5058]: I1014 08:57:03.656477 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 08:57:04 crc kubenswrapper[5058]: I1014 08:57:04.689403 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:04 crc kubenswrapper[5058]: I1014 08:57:04.785671 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhz7h\" (UniqueName: \"kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h\") pod \"b791336a-d0a2-4a32-823d-6eb0dbc56438\" (UID: \"b791336a-d0a2-4a32-823d-6eb0dbc56438\") " Oct 14 08:57:04 crc kubenswrapper[5058]: I1014 08:57:04.795164 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h" (OuterVolumeSpecName: "kube-api-access-jhz7h") pod "b791336a-d0a2-4a32-823d-6eb0dbc56438" (UID: "b791336a-d0a2-4a32-823d-6eb0dbc56438"). InnerVolumeSpecName "kube-api-access-jhz7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:04 crc kubenswrapper[5058]: I1014 08:57:04.888247 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhz7h\" (UniqueName: \"kubernetes.io/projected/b791336a-d0a2-4a32-823d-6eb0dbc56438-kube-api-access-jhz7h\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:05 crc kubenswrapper[5058]: I1014 08:57:05.291470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbsbw" event={"ID":"b791336a-d0a2-4a32-823d-6eb0dbc56438","Type":"ContainerDied","Data":"f6ad3fe8e1472c640b25c8814842bfb803906bcfef916081ab0206268661d0e6"} Oct 14 08:57:05 crc kubenswrapper[5058]: I1014 08:57:05.291528 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ad3fe8e1472c640b25c8814842bfb803906bcfef916081ab0206268661d0e6" Oct 14 08:57:05 crc kubenswrapper[5058]: I1014 08:57:05.291565 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fbsbw" Oct 14 08:57:10 crc kubenswrapper[5058]: I1014 08:57:10.719378 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.895109 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a080-account-create-6djhc"] Oct 14 08:57:11 crc kubenswrapper[5058]: E1014 08:57:11.896122 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b791336a-d0a2-4a32-823d-6eb0dbc56438" containerName="mariadb-database-create" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.896156 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b791336a-d0a2-4a32-823d-6eb0dbc56438" containerName="mariadb-database-create" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.896513 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b791336a-d0a2-4a32-823d-6eb0dbc56438" containerName="mariadb-database-create" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.897531 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.900182 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 14 08:57:11 crc kubenswrapper[5058]: I1014 08:57:11.909657 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a080-account-create-6djhc"] Oct 14 08:57:12 crc kubenswrapper[5058]: I1014 08:57:12.023266 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspnz\" (UniqueName: \"kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz\") pod \"keystone-a080-account-create-6djhc\" (UID: \"f6c7f99b-2aeb-44b2-aa8c-25e171bce768\") " pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:12 crc kubenswrapper[5058]: I1014 08:57:12.125474 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wspnz\" (UniqueName: \"kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz\") pod \"keystone-a080-account-create-6djhc\" (UID: \"f6c7f99b-2aeb-44b2-aa8c-25e171bce768\") " pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:12 crc kubenswrapper[5058]: I1014 08:57:12.158650 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wspnz\" (UniqueName: \"kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz\") pod \"keystone-a080-account-create-6djhc\" (UID: \"f6c7f99b-2aeb-44b2-aa8c-25e171bce768\") " pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:12 crc kubenswrapper[5058]: I1014 08:57:12.230915 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:12 crc kubenswrapper[5058]: I1014 08:57:12.604265 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a080-account-create-6djhc"] Oct 14 08:57:13 crc kubenswrapper[5058]: I1014 08:57:13.379447 5058 generic.go:334] "Generic (PLEG): container finished" podID="f6c7f99b-2aeb-44b2-aa8c-25e171bce768" containerID="9a1d7df0dc9a838ff712850419c9fec6ff0ae3e628b470e338345c6818d07c26" exitCode=0 Oct 14 08:57:13 crc kubenswrapper[5058]: I1014 08:57:13.379668 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a080-account-create-6djhc" event={"ID":"f6c7f99b-2aeb-44b2-aa8c-25e171bce768","Type":"ContainerDied","Data":"9a1d7df0dc9a838ff712850419c9fec6ff0ae3e628b470e338345c6818d07c26"} Oct 14 08:57:13 crc kubenswrapper[5058]: I1014 08:57:13.379941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a080-account-create-6djhc" event={"ID":"f6c7f99b-2aeb-44b2-aa8c-25e171bce768","Type":"ContainerStarted","Data":"8e274f7b8759c2b148384b851f1795c1a7d603a7d58d3c72820b85eaa318f7fd"} Oct 14 08:57:14 crc kubenswrapper[5058]: I1014 08:57:14.773432 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:14 crc kubenswrapper[5058]: I1014 08:57:14.881924 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wspnz\" (UniqueName: \"kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz\") pod \"f6c7f99b-2aeb-44b2-aa8c-25e171bce768\" (UID: \"f6c7f99b-2aeb-44b2-aa8c-25e171bce768\") " Oct 14 08:57:14 crc kubenswrapper[5058]: I1014 08:57:14.892313 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz" (OuterVolumeSpecName: "kube-api-access-wspnz") pod "f6c7f99b-2aeb-44b2-aa8c-25e171bce768" (UID: "f6c7f99b-2aeb-44b2-aa8c-25e171bce768"). InnerVolumeSpecName "kube-api-access-wspnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:14 crc kubenswrapper[5058]: I1014 08:57:14.984186 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wspnz\" (UniqueName: \"kubernetes.io/projected/f6c7f99b-2aeb-44b2-aa8c-25e171bce768-kube-api-access-wspnz\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:15 crc kubenswrapper[5058]: I1014 08:57:15.405354 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a080-account-create-6djhc" event={"ID":"f6c7f99b-2aeb-44b2-aa8c-25e171bce768","Type":"ContainerDied","Data":"8e274f7b8759c2b148384b851f1795c1a7d603a7d58d3c72820b85eaa318f7fd"} Oct 14 08:57:15 crc kubenswrapper[5058]: I1014 08:57:15.405424 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e274f7b8759c2b148384b851f1795c1a7d603a7d58d3c72820b85eaa318f7fd" Oct 14 08:57:15 crc kubenswrapper[5058]: I1014 08:57:15.405527 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a080-account-create-6djhc" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.466569 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jmlj2"] Oct 14 08:57:17 crc kubenswrapper[5058]: E1014 08:57:17.467508 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c7f99b-2aeb-44b2-aa8c-25e171bce768" containerName="mariadb-account-create" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.467534 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c7f99b-2aeb-44b2-aa8c-25e171bce768" containerName="mariadb-account-create" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.467831 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c7f99b-2aeb-44b2-aa8c-25e171bce768" containerName="mariadb-account-create" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.468934 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.471768 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.472016 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pndtw" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.472441 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.472738 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.475736 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jmlj2"] Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.535566 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.535618 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.535714 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbrg\" (UniqueName: \"kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.637932 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.637985 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.638697 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbrg\" (UniqueName: \"kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.644667 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " 
pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.651482 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.660763 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbrg\" (UniqueName: \"kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg\") pod \"keystone-db-sync-jmlj2\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:17 crc kubenswrapper[5058]: I1014 08:57:17.788760 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:18 crc kubenswrapper[5058]: I1014 08:57:18.298679 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jmlj2"] Oct 14 08:57:18 crc kubenswrapper[5058]: W1014 08:57:18.304357 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdf27fe_36f5_4e62_9dcd_00f092497365.slice/crio-e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa WatchSource:0}: Error finding container e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa: Status 404 returned error can't find the container with id e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa Oct 14 08:57:18 crc kubenswrapper[5058]: I1014 08:57:18.441000 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmlj2" event={"ID":"cfdf27fe-36f5-4e62-9dcd-00f092497365","Type":"ContainerStarted","Data":"e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa"} Oct 14 08:57:23 crc kubenswrapper[5058]: I1014 08:57:23.495281 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmlj2" event={"ID":"cfdf27fe-36f5-4e62-9dcd-00f092497365","Type":"ContainerStarted","Data":"ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d"} Oct 14 08:57:23 crc kubenswrapper[5058]: I1014 08:57:23.516464 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jmlj2" podStartSLOduration=1.7483702719999998 podStartE2EDuration="6.516448206s" podCreationTimestamp="2025-10-14 08:57:17 +0000 UTC" firstStartedPulling="2025-10-14 08:57:18.308314696 +0000 UTC m=+7786.219398522" lastFinishedPulling="2025-10-14 08:57:23.07639263 +0000 UTC m=+7790.987476456" observedRunningTime="2025-10-14 08:57:23.515934071 +0000 UTC m=+7791.427017917" watchObservedRunningTime="2025-10-14 08:57:23.516448206 +0000 UTC m=+7791.427532012" Oct 14 08:57:25 crc kubenswrapper[5058]: E1014 08:57:25.096521 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdf27fe_36f5_4e62_9dcd_00f092497365.slice/crio-ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdf27fe_36f5_4e62_9dcd_00f092497365.slice/crio-conmon-ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d.scope\": RecentStats: unable to 
find data in memory cache]" Oct 14 08:57:25 crc kubenswrapper[5058]: I1014 08:57:25.557164 5058 generic.go:334] "Generic (PLEG): container finished" podID="cfdf27fe-36f5-4e62-9dcd-00f092497365" containerID="ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d" exitCode=0 Oct 14 08:57:25 crc kubenswrapper[5058]: I1014 08:57:25.557230 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmlj2" event={"ID":"cfdf27fe-36f5-4e62-9dcd-00f092497365","Type":"ContainerDied","Data":"ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d"} Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.012218 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.129226 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data\") pod \"cfdf27fe-36f5-4e62-9dcd-00f092497365\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.129360 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle\") pod \"cfdf27fe-36f5-4e62-9dcd-00f092497365\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.129417 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnbrg\" (UniqueName: \"kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg\") pod \"cfdf27fe-36f5-4e62-9dcd-00f092497365\" (UID: \"cfdf27fe-36f5-4e62-9dcd-00f092497365\") " Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.137257 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg" (OuterVolumeSpecName: "kube-api-access-tnbrg") pod "cfdf27fe-36f5-4e62-9dcd-00f092497365" (UID: "cfdf27fe-36f5-4e62-9dcd-00f092497365"). InnerVolumeSpecName "kube-api-access-tnbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.155517 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfdf27fe-36f5-4e62-9dcd-00f092497365" (UID: "cfdf27fe-36f5-4e62-9dcd-00f092497365"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.187721 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data" (OuterVolumeSpecName: "config-data") pod "cfdf27fe-36f5-4e62-9dcd-00f092497365" (UID: "cfdf27fe-36f5-4e62-9dcd-00f092497365"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.231305 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.231342 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnbrg\" (UniqueName: \"kubernetes.io/projected/cfdf27fe-36f5-4e62-9dcd-00f092497365-kube-api-access-tnbrg\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.231353 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf27fe-36f5-4e62-9dcd-00f092497365-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.584360 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jmlj2" event={"ID":"cfdf27fe-36f5-4e62-9dcd-00f092497365","Type":"ContainerDied","Data":"e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa"} Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.584415 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35411bca7412ea22ddd0eef2f1c19327eef19dda7650b25f9502357d9d9e4aa" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.584464 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jmlj2" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.883700 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fbpv8"] Oct 14 08:57:27 crc kubenswrapper[5058]: E1014 08:57:27.884271 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdf27fe-36f5-4e62-9dcd-00f092497365" containerName="keystone-db-sync" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.884285 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdf27fe-36f5-4e62-9dcd-00f092497365" containerName="keystone-db-sync" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.884471 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdf27fe-36f5-4e62-9dcd-00f092497365" containerName="keystone-db-sync" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.886227 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.892889 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pndtw" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.893105 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.893270 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.893408 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.899039 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"] Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.900838 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.905412 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fbpv8"] Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.928980 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"] Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.943752 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-scripts\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.943806 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.943879 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.943963 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlnf\" (UniqueName: \"kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.943984 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:27 crc kubenswrapper[5058]: I1014 08:57:27.944109 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.045616 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.045679 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " 
pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.045815 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psvw\" (UniqueName: \"kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.045910 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.045990 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046187 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-scripts\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046237 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046302 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046329 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046371 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnlnf\" (UniqueName: \"kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.046397 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 
08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.053385 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.054255 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.054351 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.058012 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-scripts\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.064518 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.073244 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnlnf\" (UniqueName: \"kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf\") pod \"keystone-bootstrap-fbpv8\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") " pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.148222 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.148322 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.148359 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.148393 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2psvw\" (UniqueName: 
\"kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.148416 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.149265 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.149785 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.150368 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.151160 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.166984 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2psvw\" (UniqueName: \"kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw\") pod \"dnsmasq-dns-67d6d6fbcf-lrxw8\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.216117 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fbpv8" Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.241011 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.241011 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"
Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.733637 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fbpv8"]
Oct 14 08:57:28 crc kubenswrapper[5058]: I1014 08:57:28.788178 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"]
Oct 14 08:57:28 crc kubenswrapper[5058]: W1014 08:57:28.803445 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbee7133_91ad_4fd0_9bb8_498a86d322e1.slice/crio-44b548dfa7db4f009fde5bc87027911a082362ddb819a3a74fb426a96141edb7 WatchSource:0}: Error finding container 44b548dfa7db4f009fde5bc87027911a082362ddb819a3a74fb426a96141edb7: Status 404 returned error can't find the container with id 44b548dfa7db4f009fde5bc87027911a082362ddb819a3a74fb426a96141edb7
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.603690 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fbpv8" event={"ID":"f701b0c0-718d-4109-b394-e65b556c56e0","Type":"ContainerStarted","Data":"e01da2354a070e765096ff57b72a5372492233b3ea584e17d6c611b549e8458c"}
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.605860 5058 generic.go:334] "Generic (PLEG): container finished" podID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerID="650f6dfb7c8f882d31f318841aaedc09332942153903326f445b8fbbb0fd406d" exitCode=0
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.607846 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fbpv8" event={"ID":"f701b0c0-718d-4109-b394-e65b556c56e0","Type":"ContainerStarted","Data":"5b084911901f4bb5cb8580bc4516be6579544226276977d84ba03982880c0f77"}
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.607871 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" event={"ID":"bbee7133-91ad-4fd0-9bb8-498a86d322e1","Type":"ContainerDied","Data":"650f6dfb7c8f882d31f318841aaedc09332942153903326f445b8fbbb0fd406d"}
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.607887 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" event={"ID":"bbee7133-91ad-4fd0-9bb8-498a86d322e1","Type":"ContainerStarted","Data":"44b548dfa7db4f009fde5bc87027911a082362ddb819a3a74fb426a96141edb7"}
Oct 14 08:57:29 crc kubenswrapper[5058]: I1014 08:57:29.683777 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fbpv8" podStartSLOduration=2.6837581520000002 podStartE2EDuration="2.683758152s" podCreationTimestamp="2025-10-14 08:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:57:29.65268844 +0000 UTC m=+7797.563772256" watchObservedRunningTime="2025-10-14 08:57:29.683758152 +0000 UTC m=+7797.594841968"
Oct 14 08:57:30 crc kubenswrapper[5058]: I1014 08:57:30.662918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" event={"ID":"bbee7133-91ad-4fd0-9bb8-498a86d322e1","Type":"ContainerStarted","Data":"c242c20aa903d8b345f37a83016ab71aa82dd8b8e96611bd1a039ce7e0654c1a"}
Oct 14 08:57:30 crc kubenswrapper[5058]: I1014 08:57:30.664297 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"
Oct 14 08:57:30 crc kubenswrapper[5058]: I1014 08:57:30.693561 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" podStartSLOduration=3.693541986 podStartE2EDuration="3.693541986s" podCreationTimestamp="2025-10-14 08:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:57:30.690677445 +0000 UTC m=+7798.601761261" watchObservedRunningTime="2025-10-14 08:57:30.693541986 +0000 UTC m=+7798.604625802"
Oct 14 08:57:32 crc kubenswrapper[5058]: I1014 08:57:32.687732 5058 generic.go:334] "Generic (PLEG): container finished" podID="f701b0c0-718d-4109-b394-e65b556c56e0" containerID="e01da2354a070e765096ff57b72a5372492233b3ea584e17d6c611b549e8458c" exitCode=0
Oct 14 08:57:32 crc kubenswrapper[5058]: I1014 08:57:32.687828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fbpv8" event={"ID":"f701b0c0-718d-4109-b394-e65b556c56e0","Type":"ContainerDied","Data":"e01da2354a070e765096ff57b72a5372492233b3ea584e17d6c611b549e8458c"}
Oct 14 08:57:33 crc kubenswrapper[5058]: I1014 08:57:33.656363 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 08:57:33 crc kubenswrapper[5058]: I1014 08:57:33.656739 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 08:57:33 crc kubenswrapper[5058]: I1014 08:57:33.656826 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 08:57:33 crc kubenswrapper[5058]: I1014 08:57:33.657761 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 08:57:33 crc kubenswrapper[5058]: I1014 08:57:33.657897 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" gracePeriod=600
Oct 14 08:57:33 crc kubenswrapper[5058]: E1014 08:57:33.799556 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
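
This is the textbook liveness-failure path for machine-config-daemon-q5fhs: the prober gets connection refused on http://127.0.0.1:8798/health, marks the container unhealthy, kuberuntime kills it with the pod's termination grace period (600s here), and the restart is then throttled by CrashLoopBackOff at the logged 5m0s back-off, so sync attempts are skipped until the back-off expires. A rough tally of both signals, under the same stdin/one-entry-per-line assumptions as the sketch above:

import re
import sys
from collections import Counter

probe_re   = re.compile(r'"Probe failed" probeType="(\w+)" pod="([^"]+)"')
backoff_re = re.compile(r'with CrashLoopBackOff:.*pod="([^"]+)"')

probe_failures = Counter()  # (pod, probe type) -> failure count
backoff_skips  = Counter()  # pod -> "Error syncing pod, skipping" count

for line in sys.stdin:
    if m := probe_re.search(line):
        probe_failures[(m.group(2), m.group(1))] += 1
    if m := backoff_re.search(line):
        backoff_skips[m.group(1)] += 1

for (pod, kind), n in probe_failures.most_common():
    print(f"{n:3d} failed {kind} probe(s)    {pod}")
for pod, n in backoff_skips.most_common():
    print(f"{n:3d} CrashLoopBackOff skip(s)  {pod}")

A pod that keeps climbing in the second list while the first stays flat is stuck in back-off rather than failing fresh probes, which matches the repeated pod_workers.go errors for this pod later in the log.
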
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.106362 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fbpv8"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177486 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-scripts\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177546 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnlnf\" (UniqueName: \"kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177640 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177727 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177848 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.177916 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys\") pod \"f701b0c0-718d-4109-b394-e65b556c56e0\" (UID: \"f701b0c0-718d-4109-b394-e65b556c56e0\") "
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.185446 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f701b0c0-718d-4109-b394-e65b556c56e0" (UID: "f701b0c0-718d-4109-b394-e65b556c56e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.186244 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f701b0c0-718d-4109-b394-e65b556c56e0" (UID: "f701b0c0-718d-4109-b394-e65b556c56e0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.192469 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf" (OuterVolumeSpecName: "kube-api-access-qnlnf") pod "f701b0c0-718d-4109-b394-e65b556c56e0" (UID: "f701b0c0-718d-4109-b394-e65b556c56e0"). InnerVolumeSpecName "kube-api-access-qnlnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.227021 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f701b0c0-718d-4109-b394-e65b556c56e0" (UID: "f701b0c0-718d-4109-b394-e65b556c56e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.230650 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data" (OuterVolumeSpecName: "config-data") pod "f701b0c0-718d-4109-b394-e65b556c56e0" (UID: "f701b0c0-718d-4109-b394-e65b556c56e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290784 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290851 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290864 5058 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290877 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290889 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnlnf\" (UniqueName: \"kubernetes.io/projected/f701b0c0-718d-4109-b394-e65b556c56e0-kube-api-access-qnlnf\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.290905 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f701b0c0-718d-4109-b394-e65b556c56e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.712839 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.712839 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fbpv8"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.712834 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fbpv8" event={"ID":"f701b0c0-718d-4109-b394-e65b556c56e0","Type":"ContainerDied","Data":"5b084911901f4bb5cb8580bc4516be6579544226276977d84ba03982880c0f77"}
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.713019 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b084911901f4bb5cb8580bc4516be6579544226276977d84ba03982880c0f77"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.716898 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" exitCode=0
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.716963 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"}
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.717055 5058 scope.go:117] "RemoveContainer" containerID="1621207f350db180dc3c022749746475c52171ff861249e8bc80e83c5c095af6"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.718322 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"
Oct 14 08:57:34 crc kubenswrapper[5058]: E1014 08:57:34.719333 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.819891 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fbpv8"]
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.822743 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fbpv8"]
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.891692 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k5mpt"]
Oct 14 08:57:34 crc kubenswrapper[5058]: E1014 08:57:34.892312 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f701b0c0-718d-4109-b394-e65b556c56e0" containerName="keystone-bootstrap"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.892348 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f701b0c0-718d-4109-b394-e65b556c56e0" containerName="keystone-bootstrap"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.892589 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f701b0c0-718d-4109-b394-e65b556c56e0" containerName="keystone-bootstrap"
Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.893239 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.896456 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.896620 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.896843 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pndtw" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.898705 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 08:57:34 crc kubenswrapper[5058]: I1014 08:57:34.901662 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k5mpt"] Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.003781 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.004369 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.004439 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.004512 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkt4\" (UniqueName: \"kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.004602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.004682 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.105865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: 
\"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.105914 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.105979 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.106013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.106049 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkt4\" (UniqueName: \"kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.106092 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.112055 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.112421 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.112733 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.112756 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.113151 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt"
Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.143443 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwkt4\" (UniqueName: \"kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4\") pod \"keystone-bootstrap-k5mpt\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " pod="openstack/keystone-bootstrap-k5mpt"
Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.267151 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k5mpt"
Oct 14 08:57:35 crc kubenswrapper[5058]: W1014 08:57:35.739886 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60672787_5a75_4785_b50b_853a1ad16180.slice/crio-1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484 WatchSource:0}: Error finding container 1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484: Status 404 returned error can't find the container with id 1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484
Oct 14 08:57:35 crc kubenswrapper[5058]: I1014 08:57:35.746437 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k5mpt"]
Oct 14 08:57:36 crc kubenswrapper[5058]: I1014 08:57:36.745170 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5mpt" event={"ID":"60672787-5a75-4785-b50b-853a1ad16180","Type":"ContainerStarted","Data":"d029ab76c501801cac65070088bf6bcc40cbcbb1d1bb6886707a0643897e3da5"}
Oct 14 08:57:36 crc kubenswrapper[5058]: I1014 08:57:36.745588 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5mpt" event={"ID":"60672787-5a75-4785-b50b-853a1ad16180","Type":"ContainerStarted","Data":"1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484"}
Oct 14 08:57:36 crc kubenswrapper[5058]: I1014 08:57:36.781603 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k5mpt" podStartSLOduration=2.781573821 podStartE2EDuration="2.781573821s" podCreationTimestamp="2025-10-14 08:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:57:36.771214137 +0000 UTC m=+7804.682297973" watchObservedRunningTime="2025-10-14 08:57:36.781573821 +0000 UTC m=+7804.692657657"
Oct 14 08:57:36 crc kubenswrapper[5058]: I1014 08:57:36.808976 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f701b0c0-718d-4109-b394-e65b556c56e0" path="/var/lib/kubelet/pods/f701b0c0-718d-4109-b394-e65b556c56e0/volumes"
Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.243448 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"
Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.353612 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"]
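
The pod_startup_latency_tracker entry is logged once per pod when kubelet first observes it running: podStartSLOduration is the startup-SLI latency (image-pull time excluded), podStartE2EDuration is wall-clock time since podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling timestamps here indicate the images were already cached. Pulling those numbers out gives a quick slowest-first table (same stdin/one-entry-per-line assumptions as the earlier sketches):

import re
import sys

slo_re = re.compile(r'"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)')

durations = []  # (seconds, pod)

for line in sys.stdin:
    if m := slo_re.search(line):
        durations.append((float(m.group(2)), m.group(1)))

for secs, pod in sorted(durations, reverse=True):
    print(f"{secs:8.3f}s  {pod}")

Over this window that yields dnsmasq-dns-67d6d6fbcf-lrxw8 at ~3.69s and the two keystone-bootstrap jobs at ~2.68s and ~2.78s, with keystone-64c6487b4d-qstgs (~2.87s) following further down.
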
podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="dnsmasq-dns" containerID="cri-o://6aeba1b43e5a5d2e7d8b0384a4d51ffda4f906c29251e649976518c999e290eb" gracePeriod=10 Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.770721 5058 generic.go:334] "Generic (PLEG): container finished" podID="60672787-5a75-4785-b50b-853a1ad16180" containerID="d029ab76c501801cac65070088bf6bcc40cbcbb1d1bb6886707a0643897e3da5" exitCode=0 Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.771071 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5mpt" event={"ID":"60672787-5a75-4785-b50b-853a1ad16180","Type":"ContainerDied","Data":"d029ab76c501801cac65070088bf6bcc40cbcbb1d1bb6886707a0643897e3da5"} Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.773828 5058 generic.go:334] "Generic (PLEG): container finished" podID="3cd1d393-3acf-4fca-9337-467f82477386" containerID="6aeba1b43e5a5d2e7d8b0384a4d51ffda4f906c29251e649976518c999e290eb" exitCode=0 Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.773858 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" event={"ID":"3cd1d393-3acf-4fca-9337-467f82477386","Type":"ContainerDied","Data":"6aeba1b43e5a5d2e7d8b0384a4d51ffda4f906c29251e649976518c999e290eb"} Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.773910 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" event={"ID":"3cd1d393-3acf-4fca-9337-467f82477386","Type":"ContainerDied","Data":"808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541"} Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.773923 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808ebd92de06d621bcd82f2a69aec533eabaf8b69110db091fca7351589f2541" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.855224 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.885760 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb\") pod \"3cd1d393-3acf-4fca-9337-467f82477386\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.886002 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf7n4\" (UniqueName: \"kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4\") pod \"3cd1d393-3acf-4fca-9337-467f82477386\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.886031 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb\") pod \"3cd1d393-3acf-4fca-9337-467f82477386\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.886585 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc\") pod \"3cd1d393-3acf-4fca-9337-467f82477386\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.886633 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config\") pod \"3cd1d393-3acf-4fca-9337-467f82477386\" (UID: \"3cd1d393-3acf-4fca-9337-467f82477386\") " Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.896320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4" (OuterVolumeSpecName: "kube-api-access-nf7n4") pod "3cd1d393-3acf-4fca-9337-467f82477386" (UID: "3cd1d393-3acf-4fca-9337-467f82477386"). InnerVolumeSpecName "kube-api-access-nf7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.943495 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cd1d393-3acf-4fca-9337-467f82477386" (UID: "3cd1d393-3acf-4fca-9337-467f82477386"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.944464 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3cd1d393-3acf-4fca-9337-467f82477386" (UID: "3cd1d393-3acf-4fca-9337-467f82477386"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.947363 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3cd1d393-3acf-4fca-9337-467f82477386" (UID: "3cd1d393-3acf-4fca-9337-467f82477386"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.960780 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config" (OuterVolumeSpecName: "config") pod "3cd1d393-3acf-4fca-9337-467f82477386" (UID: "3cd1d393-3acf-4fca-9337-467f82477386"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.989449 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.989695 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf7n4\" (UniqueName: \"kubernetes.io/projected/3cd1d393-3acf-4fca-9337-467f82477386-kube-api-access-nf7n4\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.989806 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.989897 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:38 crc kubenswrapper[5058]: I1014 08:57:38.989989 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd1d393-3acf-4fca-9337-467f82477386-config\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:39 crc kubenswrapper[5058]: I1014 08:57:39.786238 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcffdbb49-8m8gf" Oct 14 08:57:39 crc kubenswrapper[5058]: I1014 08:57:39.851582 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"] Oct 14 08:57:39 crc kubenswrapper[5058]: I1014 08:57:39.864030 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dcffdbb49-8m8gf"] Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.362426 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444453 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444550 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444600 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444672 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444705 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkt4\" (UniqueName: \"kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.444899 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts\") pod \"60672787-5a75-4785-b50b-853a1ad16180\" (UID: \"60672787-5a75-4785-b50b-853a1ad16180\") " Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.451080 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.451182 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts" (OuterVolumeSpecName: "scripts") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.451618 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4" (OuterVolumeSpecName: "kube-api-access-bwkt4") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "kube-api-access-bwkt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.469202 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.474971 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.491951 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data" (OuterVolumeSpecName: "config-data") pod "60672787-5a75-4785-b50b-853a1ad16180" (UID: "60672787-5a75-4785-b50b-853a1ad16180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.547245 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.547562 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.547771 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.547962 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwkt4\" (UniqueName: \"kubernetes.io/projected/60672787-5a75-4785-b50b-853a1ad16180-kube-api-access-bwkt4\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.548115 5058 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.548261 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60672787-5a75-4785-b50b-853a1ad16180-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.805360 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k5mpt" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.808077 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd1d393-3acf-4fca-9337-467f82477386" path="/var/lib/kubelet/pods/3cd1d393-3acf-4fca-9337-467f82477386/volumes" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.808943 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k5mpt" event={"ID":"60672787-5a75-4785-b50b-853a1ad16180","Type":"ContainerDied","Data":"1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484"} Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.808972 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1442413595af55ce36e24b147711469e9db29ffb1b0c3417900880b1d45dc484" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.909781 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-64c6487b4d-qstgs"] Oct 14 08:57:40 crc kubenswrapper[5058]: E1014 08:57:40.910448 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60672787-5a75-4785-b50b-853a1ad16180" containerName="keystone-bootstrap" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.910489 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="60672787-5a75-4785-b50b-853a1ad16180" containerName="keystone-bootstrap" Oct 14 08:57:40 crc kubenswrapper[5058]: E1014 08:57:40.910533 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="init" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.910550 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="init" Oct 14 08:57:40 crc kubenswrapper[5058]: E1014 08:57:40.910581 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="dnsmasq-dns" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.910594 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="dnsmasq-dns" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.911002 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd1d393-3acf-4fca-9337-467f82477386" containerName="dnsmasq-dns" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.911051 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="60672787-5a75-4785-b50b-853a1ad16180" containerName="keystone-bootstrap" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.915322 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.919654 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pndtw" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.922725 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.922865 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.930154 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 08:57:40 crc kubenswrapper[5058]: I1014 08:57:40.933708 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64c6487b4d-qstgs"] Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.057940 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-credential-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.058226 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-scripts\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.058345 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-fernet-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.058442 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9x7\" (UniqueName: \"kubernetes.io/projected/68efc181-c994-47af-a145-385f9670b8b8-kube-api-access-vn9x7\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.058514 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-combined-ca-bundle\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.058836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-config-data\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160644 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-scripts\") pod \"keystone-64c6487b4d-qstgs\" 
(UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160753 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-fernet-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160791 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn9x7\" (UniqueName: \"kubernetes.io/projected/68efc181-c994-47af-a145-385f9670b8b8-kube-api-access-vn9x7\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160845 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-combined-ca-bundle\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160881 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-config-data\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.160956 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-credential-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.167705 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-fernet-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.167780 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-credential-keys\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.169168 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-config-data\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.170933 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-combined-ca-bundle\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 
08:57:41.174699 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68efc181-c994-47af-a145-385f9670b8b8-scripts\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.191083 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn9x7\" (UniqueName: \"kubernetes.io/projected/68efc181-c994-47af-a145-385f9670b8b8-kube-api-access-vn9x7\") pod \"keystone-64c6487b4d-qstgs\" (UID: \"68efc181-c994-47af-a145-385f9670b8b8\") " pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.241999 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.601943 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64c6487b4d-qstgs"] Oct 14 08:57:41 crc kubenswrapper[5058]: I1014 08:57:41.818958 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64c6487b4d-qstgs" event={"ID":"68efc181-c994-47af-a145-385f9670b8b8","Type":"ContainerStarted","Data":"1dfa65ab01fb34de9c886d06b1fb63044f8d4aa8510c6e6af9c4a631444e43cc"} Oct 14 08:57:42 crc kubenswrapper[5058]: I1014 08:57:42.835135 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64c6487b4d-qstgs" event={"ID":"68efc181-c994-47af-a145-385f9670b8b8","Type":"ContainerStarted","Data":"2e0c79c0f985a0b9539d84f37ddde14dca04bac6f4bb951a23d3ab3180470dd0"} Oct 14 08:57:42 crc kubenswrapper[5058]: I1014 08:57:42.836128 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:57:42 crc kubenswrapper[5058]: I1014 08:57:42.874207 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-64c6487b4d-qstgs" podStartSLOduration=2.874174105 podStartE2EDuration="2.874174105s" podCreationTimestamp="2025-10-14 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 08:57:42.863213494 +0000 UTC m=+7810.774297330" watchObservedRunningTime="2025-10-14 08:57:42.874174105 +0000 UTC m=+7810.785257951" Oct 14 08:57:47 crc kubenswrapper[5058]: I1014 08:57:47.789760 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:57:47 crc kubenswrapper[5058]: E1014 08:57:47.790285 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:57:58 crc kubenswrapper[5058]: I1014 08:57:58.790468 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:57:58 crc kubenswrapper[5058]: E1014 08:57:58.791303 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:58:12 crc kubenswrapper[5058]: I1014 08:58:12.734659 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-64c6487b4d-qstgs" Oct 14 08:58:13 crc kubenswrapper[5058]: I1014 08:58:13.790277 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:58:13 crc kubenswrapper[5058]: E1014 08:58:13.790889 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.935147 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.937389 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.943065 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.943308 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.945956 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.956744 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gbhft" Oct 14 08:58:16 crc kubenswrapper[5058]: E1014 08:58:16.959046 5058 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-gbhft\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.969554 5058 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"741fa514-00b4-49e2-be04-c8f78f9f578b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T08:58:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T08:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T08:58:16Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T08:58:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:0468cb21803d466b2abfe00835cf1d2d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t4td\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T08:58:16Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.971736 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:16 crc kubenswrapper[5058]: E1014 08:58:16.972465 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7t4td openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[kube-api-access-7t4td openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="741fa514-00b4-49e2-be04-c8f78f9f578b" Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.985601 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.994710 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:16 crc kubenswrapper[5058]: I1014 08:58:16.996108 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.002405 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="741fa514-00b4-49e2-be04-c8f78f9f578b" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.002776 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.073134 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4td\" (UniqueName: \"kubernetes.io/projected/741fa514-00b4-49e2-be04-c8f78f9f578b-kube-api-access-7t4td\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.073180 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.073653 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175660 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175716 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175777 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175851 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4td\" (UniqueName: \"kubernetes.io/projected/741fa514-00b4-49e2-be04-c8f78f9f578b-kube-api-access-7t4td\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175879 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " 
pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.175965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjjm\" (UniqueName: \"kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.176720 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: E1014 08:58:17.179558 5058 projected.go:194] Error preparing data for projected volume kube-api-access-7t4td for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (741fa514-00b4-49e2-be04-c8f78f9f578b) does not match the UID in record. The object might have been deleted and then recreated Oct 14 08:58:17 crc kubenswrapper[5058]: E1014 08:58:17.179668 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/741fa514-00b4-49e2-be04-c8f78f9f578b-kube-api-access-7t4td podName:741fa514-00b4-49e2-be04-c8f78f9f578b nodeName:}" failed. No retries permitted until 2025-10-14 08:58:17.679640852 +0000 UTC m=+7845.590724688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7t4td" (UniqueName: "kubernetes.io/projected/741fa514-00b4-49e2-be04-c8f78f9f578b-kube-api-access-7t4td") pod "openstackclient" (UID: "741fa514-00b4-49e2-be04-c8f78f9f578b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (741fa514-00b4-49e2-be04-c8f78f9f578b) does not match the UID in record. The object might have been deleted and then recreated Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.183752 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret\") pod \"openstackclient\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.256051 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.261035 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="741fa514-00b4-49e2-be04-c8f78f9f578b" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.267848 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.272718 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="741fa514-00b4-49e2-be04-c8f78f9f578b" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.277936 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwjjm\" (UniqueName: \"kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.278075 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.278126 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.279095 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.285523 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.306098 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwjjm\" (UniqueName: \"kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm\") pod \"openstackclient\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") " pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.322816 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.381419 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config\") pod \"741fa514-00b4-49e2-be04-c8f78f9f578b\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.381874 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret\") pod \"741fa514-00b4-49e2-be04-c8f78f9f578b\" (UID: \"741fa514-00b4-49e2-be04-c8f78f9f578b\") " Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.382255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "741fa514-00b4-49e2-be04-c8f78f9f578b" (UID: "741fa514-00b4-49e2-be04-c8f78f9f578b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.383090 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.383128 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4td\" (UniqueName: \"kubernetes.io/projected/741fa514-00b4-49e2-be04-c8f78f9f578b-kube-api-access-7t4td\") on node \"crc\" DevicePath \"\"" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.389095 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "741fa514-00b4-49e2-be04-c8f78f9f578b" (UID: "741fa514-00b4-49e2-be04-c8f78f9f578b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.484559 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/741fa514-00b4-49e2-be04-c8f78f9f578b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.621105 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 08:58:17 crc kubenswrapper[5058]: W1014 08:58:17.635602 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49766830_cc2e_4d82_9a6d_a30bc53fa60b.slice/crio-d0ec1f206a6ecffdf9242ecf1e77cb8ae75eb0ea273d71522fe10e71e03481c8 WatchSource:0}: Error finding container d0ec1f206a6ecffdf9242ecf1e77cb8ae75eb0ea273d71522fe10e71e03481c8: Status 404 returned error can't find the container with id d0ec1f206a6ecffdf9242ecf1e77cb8ae75eb0ea273d71522fe10e71e03481c8 Oct 14 08:58:17 crc kubenswrapper[5058]: I1014 08:58:17.770109 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gbhft" Oct 14 08:58:18 crc kubenswrapper[5058]: I1014 08:58:18.265814 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"49766830-cc2e-4d82-9a6d-a30bc53fa60b","Type":"ContainerStarted","Data":"d0ec1f206a6ecffdf9242ecf1e77cb8ae75eb0ea273d71522fe10e71e03481c8"} Oct 14 08:58:18 crc kubenswrapper[5058]: I1014 08:58:18.265843 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 08:58:18 crc kubenswrapper[5058]: I1014 08:58:18.269117 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="741fa514-00b4-49e2-be04-c8f78f9f578b" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" Oct 14 08:58:18 crc kubenswrapper[5058]: I1014 08:58:18.279498 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="741fa514-00b4-49e2-be04-c8f78f9f578b" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" Oct 14 08:58:18 crc kubenswrapper[5058]: I1014 08:58:18.807561 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741fa514-00b4-49e2-be04-c8f78f9f578b" path="/var/lib/kubelet/pods/741fa514-00b4-49e2-be04-c8f78f9f578b/volumes" Oct 14 08:58:27 crc kubenswrapper[5058]: I1014 08:58:27.789926 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:58:27 crc kubenswrapper[5058]: E1014 08:58:27.790615 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:58:29 crc kubenswrapper[5058]: I1014 08:58:29.382626 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"49766830-cc2e-4d82-9a6d-a30bc53fa60b","Type":"ContainerStarted","Data":"a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75"} Oct 14 08:58:29 crc kubenswrapper[5058]: I1014 08:58:29.402201 5058 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.8309916250000002 podStartE2EDuration="13.402182241s" podCreationTimestamp="2025-10-14 08:58:16 +0000 UTC" firstStartedPulling="2025-10-14 08:58:17.638884281 +0000 UTC m=+7845.549968107" lastFinishedPulling="2025-10-14 08:58:28.210074917 +0000 UTC m=+7856.121158723" observedRunningTime="2025-10-14 08:58:29.399461664 +0000 UTC m=+7857.310545510" watchObservedRunningTime="2025-10-14 08:58:29.402182241 +0000 UTC m=+7857.313266057" Oct 14 08:58:39 crc kubenswrapper[5058]: I1014 08:58:39.790881 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:58:39 crc kubenswrapper[5058]: E1014 08:58:39.791895 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:58:54 crc kubenswrapper[5058]: I1014 08:58:54.790010 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:58:54 crc kubenswrapper[5058]: E1014 08:58:54.791095 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:59:09 crc kubenswrapper[5058]: I1014 08:59:09.790119 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:59:09 crc kubenswrapper[5058]: E1014 08:59:09.791101 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:59:23 crc kubenswrapper[5058]: I1014 08:59:23.789834 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:59:23 crc kubenswrapper[5058]: E1014 08:59:23.790750 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:59:34 crc kubenswrapper[5058]: I1014 08:59:34.789507 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:59:34 crc kubenswrapper[5058]: E1014 08:59:34.790228 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:59:46 crc kubenswrapper[5058]: I1014 08:59:46.789708 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 08:59:46 crc kubenswrapper[5058]: E1014 08:59:46.790595 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 08:59:54 crc kubenswrapper[5058]: I1014 08:59:54.938674 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-92rfb"] Oct 14 08:59:54 crc kubenswrapper[5058]: I1014 08:59:54.940176 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:54 crc kubenswrapper[5058]: I1014 08:59:54.949608 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-92rfb"] Oct 14 08:59:55 crc kubenswrapper[5058]: I1014 08:59:55.013229 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95k5t\" (UniqueName: \"kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t\") pod \"barbican-db-create-92rfb\" (UID: \"20a4c6ec-719f-48a2-b490-3883988e0a82\") " pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:55 crc kubenswrapper[5058]: I1014 08:59:55.115987 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95k5t\" (UniqueName: \"kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t\") pod \"barbican-db-create-92rfb\" (UID: \"20a4c6ec-719f-48a2-b490-3883988e0a82\") " pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:55 crc kubenswrapper[5058]: I1014 08:59:55.153183 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95k5t\" (UniqueName: \"kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t\") pod \"barbican-db-create-92rfb\" (UID: \"20a4c6ec-719f-48a2-b490-3883988e0a82\") " pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:55 crc kubenswrapper[5058]: I1014 08:59:55.273784 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:55 crc kubenswrapper[5058]: I1014 08:59:55.750504 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-92rfb"] Oct 14 08:59:56 crc kubenswrapper[5058]: I1014 08:59:56.306421 5058 generic.go:334] "Generic (PLEG): container finished" podID="20a4c6ec-719f-48a2-b490-3883988e0a82" containerID="5ead1de5653c9deaa4d6d21dc5d9db0e8320dc43b22fdd379d241528bb7b86cd" exitCode=0 Oct 14 08:59:56 crc kubenswrapper[5058]: I1014 08:59:56.306507 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-92rfb" event={"ID":"20a4c6ec-719f-48a2-b490-3883988e0a82","Type":"ContainerDied","Data":"5ead1de5653c9deaa4d6d21dc5d9db0e8320dc43b22fdd379d241528bb7b86cd"} Oct 14 08:59:56 crc kubenswrapper[5058]: I1014 08:59:56.306691 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-92rfb" event={"ID":"20a4c6ec-719f-48a2-b490-3883988e0a82","Type":"ContainerStarted","Data":"f7684c25c81351ff892bdf95c4ba16003118f035c893f9e06ae7c8828f0fd2bc"} Oct 14 08:59:57 crc kubenswrapper[5058]: I1014 08:59:57.732530 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-92rfb" Oct 14 08:59:57 crc kubenswrapper[5058]: I1014 08:59:57.883044 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95k5t\" (UniqueName: \"kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t\") pod \"20a4c6ec-719f-48a2-b490-3883988e0a82\" (UID: \"20a4c6ec-719f-48a2-b490-3883988e0a82\") " Oct 14 08:59:57 crc kubenswrapper[5058]: I1014 08:59:57.891136 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t" (OuterVolumeSpecName: "kube-api-access-95k5t") pod "20a4c6ec-719f-48a2-b490-3883988e0a82" (UID: "20a4c6ec-719f-48a2-b490-3883988e0a82"). InnerVolumeSpecName "kube-api-access-95k5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 08:59:57 crc kubenswrapper[5058]: I1014 08:59:57.985171 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95k5t\" (UniqueName: \"kubernetes.io/projected/20a4c6ec-719f-48a2-b490-3883988e0a82-kube-api-access-95k5t\") on node \"crc\" DevicePath \"\"" Oct 14 08:59:58 crc kubenswrapper[5058]: I1014 08:59:58.334147 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-92rfb" event={"ID":"20a4c6ec-719f-48a2-b490-3883988e0a82","Type":"ContainerDied","Data":"f7684c25c81351ff892bdf95c4ba16003118f035c893f9e06ae7c8828f0fd2bc"} Oct 14 08:59:58 crc kubenswrapper[5058]: I1014 08:59:58.334201 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7684c25c81351ff892bdf95c4ba16003118f035c893f9e06ae7c8828f0fd2bc" Oct 14 08:59:58 crc kubenswrapper[5058]: I1014 08:59:58.334228 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-92rfb" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.134389 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98"] Oct 14 09:00:00 crc kubenswrapper[5058]: E1014 09:00:00.135553 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a4c6ec-719f-48a2-b490-3883988e0a82" containerName="mariadb-database-create" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.135583 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a4c6ec-719f-48a2-b490-3883988e0a82" containerName="mariadb-database-create" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.135964 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a4c6ec-719f-48a2-b490-3883988e0a82" containerName="mariadb-database-create" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.137013 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.141984 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.141984 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.143030 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98"] Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.224720 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbl6n\" (UniqueName: \"kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.224825 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.224872 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.326951 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.327012 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.327125 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbl6n\" (UniqueName: \"kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.328161 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.338050 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.348361 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbl6n\" (UniqueName: \"kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n\") pod \"collect-profiles-29340540-x4m98\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.459002 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:00 crc kubenswrapper[5058]: I1014 09:00:00.742001 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98"] Oct 14 09:00:00 crc kubenswrapper[5058]: W1014 09:00:00.754204 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b947c4d_a2b8_4607_adb9_e54db3b99255.slice/crio-a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d WatchSource:0}: Error finding container a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d: Status 404 returned error can't find the container with id a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d Oct 14 09:00:01 crc kubenswrapper[5058]: I1014 09:00:01.367903 5058 generic.go:334] "Generic (PLEG): container finished" podID="4b947c4d-a2b8-4607-adb9-e54db3b99255" containerID="f566e96ce24926c0dcbf38d0bf09e1229a5d56dd12210924e7c37ebbe451674b" exitCode=0 Oct 14 09:00:01 crc kubenswrapper[5058]: I1014 09:00:01.367962 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" event={"ID":"4b947c4d-a2b8-4607-adb9-e54db3b99255","Type":"ContainerDied","Data":"f566e96ce24926c0dcbf38d0bf09e1229a5d56dd12210924e7c37ebbe451674b"} Oct 14 09:00:01 crc kubenswrapper[5058]: I1014 09:00:01.368001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" event={"ID":"4b947c4d-a2b8-4607-adb9-e54db3b99255","Type":"ContainerStarted","Data":"a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d"} Oct 14 09:00:01 crc kubenswrapper[5058]: I1014 09:00:01.790993 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:00:01 crc kubenswrapper[5058]: E1014 09:00:01.791472 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.777976 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.872675 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume\") pod \"4b947c4d-a2b8-4607-adb9-e54db3b99255\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.872971 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbl6n\" (UniqueName: \"kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n\") pod \"4b947c4d-a2b8-4607-adb9-e54db3b99255\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.873217 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume\") pod \"4b947c4d-a2b8-4607-adb9-e54db3b99255\" (UID: \"4b947c4d-a2b8-4607-adb9-e54db3b99255\") " Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.874965 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b947c4d-a2b8-4607-adb9-e54db3b99255" (UID: "4b947c4d-a2b8-4607-adb9-e54db3b99255"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.880410 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b947c4d-a2b8-4607-adb9-e54db3b99255" (UID: "4b947c4d-a2b8-4607-adb9-e54db3b99255"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.880725 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n" (OuterVolumeSpecName: "kube-api-access-kbl6n") pod "4b947c4d-a2b8-4607-adb9-e54db3b99255" (UID: "4b947c4d-a2b8-4607-adb9-e54db3b99255"). InnerVolumeSpecName "kube-api-access-kbl6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.976506 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbl6n\" (UniqueName: \"kubernetes.io/projected/4b947c4d-a2b8-4607-adb9-e54db3b99255-kube-api-access-kbl6n\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.976550 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b947c4d-a2b8-4607-adb9-e54db3b99255-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:02 crc kubenswrapper[5058]: I1014 09:00:02.976563 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b947c4d-a2b8-4607-adb9-e54db3b99255-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:03 crc kubenswrapper[5058]: I1014 09:00:03.388019 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" event={"ID":"4b947c4d-a2b8-4607-adb9-e54db3b99255","Type":"ContainerDied","Data":"a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d"} Oct 14 09:00:03 crc kubenswrapper[5058]: I1014 09:00:03.388112 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98" Oct 14 09:00:03 crc kubenswrapper[5058]: I1014 09:00:03.388128 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17fe68dc94fca92679ae2c0ed87480fbc57cc52f3d85e0a0c1122be404b884d" Oct 14 09:00:03 crc kubenswrapper[5058]: I1014 09:00:03.840611 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk"] Oct 14 09:00:03 crc kubenswrapper[5058]: I1014 09:00:03.846999 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340495-9p2dk"] Oct 14 09:00:04 crc kubenswrapper[5058]: I1014 09:00:04.806255 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e56306-4fb5-4641-a8c1-3b9846da69e4" path="/var/lib/kubelet/pods/d4e56306-4fb5-4641-a8c1-3b9846da69e4/volumes" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.029661 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8bb9-account-create-f48gz"] Oct 14 09:00:05 crc kubenswrapper[5058]: E1014 09:00:05.030532 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b947c4d-a2b8-4607-adb9-e54db3b99255" containerName="collect-profiles" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.030607 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b947c4d-a2b8-4607-adb9-e54db3b99255" containerName="collect-profiles" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.030881 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b947c4d-a2b8-4607-adb9-e54db3b99255" containerName="collect-profiles" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.031506 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.035496 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.047106 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb9-account-create-f48gz"] Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.135430 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99lm\" (UniqueName: \"kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm\") pod \"barbican-8bb9-account-create-f48gz\" (UID: \"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc\") " pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.237457 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p99lm\" (UniqueName: \"kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm\") pod \"barbican-8bb9-account-create-f48gz\" (UID: \"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc\") " pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.269167 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p99lm\" (UniqueName: \"kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm\") pod \"barbican-8bb9-account-create-f48gz\" (UID: \"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc\") " pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.358468 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:05 crc kubenswrapper[5058]: I1014 09:00:05.875740 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb9-account-create-f48gz"] Oct 14 09:00:06 crc kubenswrapper[5058]: I1014 09:00:06.418738 5058 generic.go:334] "Generic (PLEG): container finished" podID="aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" containerID="026cc49e2d48700a17830e95235358399edc9f659ed2483bd2d04eeaeceabb3a" exitCode=0 Oct 14 09:00:06 crc kubenswrapper[5058]: I1014 09:00:06.418785 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb9-account-create-f48gz" event={"ID":"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc","Type":"ContainerDied","Data":"026cc49e2d48700a17830e95235358399edc9f659ed2483bd2d04eeaeceabb3a"} Oct 14 09:00:06 crc kubenswrapper[5058]: I1014 09:00:06.418840 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb9-account-create-f48gz" event={"ID":"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc","Type":"ContainerStarted","Data":"490d1f3a05a940f8bc35b41f29d9c4212784e8c102421cacd8f44c13929cc61b"} Oct 14 09:00:07 crc kubenswrapper[5058]: I1014 09:00:07.927430 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.001174 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p99lm\" (UniqueName: \"kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm\") pod \"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc\" (UID: \"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc\") " Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.007892 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm" (OuterVolumeSpecName: "kube-api-access-p99lm") pod "aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" (UID: "aec9b5eb-4e30-4901-bf63-4a4cbd0307cc"). InnerVolumeSpecName "kube-api-access-p99lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.104986 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p99lm\" (UniqueName: \"kubernetes.io/projected/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc-kube-api-access-p99lm\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.441824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb9-account-create-f48gz" event={"ID":"aec9b5eb-4e30-4901-bf63-4a4cbd0307cc","Type":"ContainerDied","Data":"490d1f3a05a940f8bc35b41f29d9c4212784e8c102421cacd8f44c13929cc61b"} Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.442063 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490d1f3a05a940f8bc35b41f29d9c4212784e8c102421cacd8f44c13929cc61b" Oct 14 09:00:08 crc kubenswrapper[5058]: I1014 09:00:08.441898 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb9-account-create-f48gz" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.280724 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-q5qzw"] Oct 14 09:00:10 crc kubenswrapper[5058]: E1014 09:00:10.281645 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" containerName="mariadb-account-create" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.281671 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" containerName="mariadb-account-create" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.282000 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" containerName="mariadb-account-create" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.282838 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.292324 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.292567 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mnrfd" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.294682 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q5qzw"] Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.357560 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.357636 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmqm\" (UniqueName: \"kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.357888 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.459317 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.459359 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsmqm\" (UniqueName: \"kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.459440 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.465609 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.476901 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsmqm\" (UniqueName: 
\"kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.477490 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data\") pod \"barbican-db-sync-q5qzw\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:10 crc kubenswrapper[5058]: I1014 09:00:10.609003 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:11 crc kubenswrapper[5058]: I1014 09:00:11.107737 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q5qzw"] Oct 14 09:00:11 crc kubenswrapper[5058]: I1014 09:00:11.476034 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q5qzw" event={"ID":"79fbca42-6fb5-4dab-8f00-8d466b10974c","Type":"ContainerStarted","Data":"fff7111157cd76beec12b52e4656cc1f815214b3a801f655c3dbc93cd1aef09d"} Oct 14 09:00:15 crc kubenswrapper[5058]: I1014 09:00:15.790305 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:00:15 crc kubenswrapper[5058]: E1014 09:00:15.792220 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:00:16 crc kubenswrapper[5058]: I1014 09:00:16.525475 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q5qzw" event={"ID":"79fbca42-6fb5-4dab-8f00-8d466b10974c","Type":"ContainerStarted","Data":"236993a75125ebdfbb21fc3943e8c7d4c26c2f8a27333594a32b89b217274573"} Oct 14 09:00:16 crc kubenswrapper[5058]: I1014 09:00:16.549896 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-q5qzw" podStartSLOduration=2.065156738 podStartE2EDuration="6.549879993s" podCreationTimestamp="2025-10-14 09:00:10 +0000 UTC" firstStartedPulling="2025-10-14 09:00:11.115897588 +0000 UTC m=+7959.026981414" lastFinishedPulling="2025-10-14 09:00:15.600620863 +0000 UTC m=+7963.511704669" observedRunningTime="2025-10-14 09:00:16.546521197 +0000 UTC m=+7964.457605083" watchObservedRunningTime="2025-10-14 09:00:16.549879993 +0000 UTC m=+7964.460963809" Oct 14 09:00:18 crc kubenswrapper[5058]: I1014 09:00:18.553787 5058 generic.go:334] "Generic (PLEG): container finished" podID="79fbca42-6fb5-4dab-8f00-8d466b10974c" containerID="236993a75125ebdfbb21fc3943e8c7d4c26c2f8a27333594a32b89b217274573" exitCode=0 Oct 14 09:00:18 crc kubenswrapper[5058]: I1014 09:00:18.553947 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q5qzw" event={"ID":"79fbca42-6fb5-4dab-8f00-8d466b10974c","Type":"ContainerDied","Data":"236993a75125ebdfbb21fc3943e8c7d4c26c2f8a27333594a32b89b217274573"} Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.107870 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.263194 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsmqm\" (UniqueName: \"kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm\") pod \"79fbca42-6fb5-4dab-8f00-8d466b10974c\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.263689 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data\") pod \"79fbca42-6fb5-4dab-8f00-8d466b10974c\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.263877 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle\") pod \"79fbca42-6fb5-4dab-8f00-8d466b10974c\" (UID: \"79fbca42-6fb5-4dab-8f00-8d466b10974c\") " Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.271719 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm" (OuterVolumeSpecName: "kube-api-access-dsmqm") pod "79fbca42-6fb5-4dab-8f00-8d466b10974c" (UID: "79fbca42-6fb5-4dab-8f00-8d466b10974c"). InnerVolumeSpecName "kube-api-access-dsmqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.271743 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79fbca42-6fb5-4dab-8f00-8d466b10974c" (UID: "79fbca42-6fb5-4dab-8f00-8d466b10974c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.308761 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79fbca42-6fb5-4dab-8f00-8d466b10974c" (UID: "79fbca42-6fb5-4dab-8f00-8d466b10974c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.366682 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsmqm\" (UniqueName: \"kubernetes.io/projected/79fbca42-6fb5-4dab-8f00-8d466b10974c-kube-api-access-dsmqm\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.366756 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.366783 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fbca42-6fb5-4dab-8f00-8d466b10974c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.579109 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q5qzw" event={"ID":"79fbca42-6fb5-4dab-8f00-8d466b10974c","Type":"ContainerDied","Data":"fff7111157cd76beec12b52e4656cc1f815214b3a801f655c3dbc93cd1aef09d"} Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.579178 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff7111157cd76beec12b52e4656cc1f815214b3a801f655c3dbc93cd1aef09d" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.579270 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q5qzw" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.895945 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-fb9b5d689-qhv8k"] Oct 14 09:00:20 crc kubenswrapper[5058]: E1014 09:00:20.896398 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fbca42-6fb5-4dab-8f00-8d466b10974c" containerName="barbican-db-sync" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.896412 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fbca42-6fb5-4dab-8f00-8d466b10974c" containerName="barbican-db-sync" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.896633 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fbca42-6fb5-4dab-8f00-8d466b10974c" containerName="barbican-db-sync" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.897809 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.901532 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mnrfd" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.901709 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.904747 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.906856 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fb9b5d689-qhv8k"] Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.927167 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5dfd989b84-vpnc9"] Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.928843 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.932332 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.942858 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dfd989b84-vpnc9"] Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.971887 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"] Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.973373 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.984334 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c953fc-71cc-44c2-8e95-1219f40a82ff-logs\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.984475 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data-custom\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.984516 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-combined-ca-bundle\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.984539 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.984555 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dls5c\" (UniqueName: \"kubernetes.io/projected/11c953fc-71cc-44c2-8e95-1219f40a82ff-kube-api-access-dls5c\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:20 crc kubenswrapper[5058]: I1014 09:00:20.986606 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.052364 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-786cbd49b4-wws7f"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.053737 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.057275 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.068590 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786cbd49b4-wws7f"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.086828 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-combined-ca-bundle\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.086913 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.086935 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dls5c\" (UniqueName: \"kubernetes.io/projected/11c953fc-71cc-44c2-8e95-1219f40a82ff-kube-api-access-dls5c\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.088971 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-combined-ca-bundle\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089046 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2lb7\" (UniqueName: \"kubernetes.io/projected/245c0703-016a-4a7c-8bae-f665b29e0c64-kube-api-access-r2lb7\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data-custom\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089112 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089133 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/245c0703-016a-4a7c-8bae-f665b29e0c64-logs\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089311 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c953fc-71cc-44c2-8e95-1219f40a82ff-logs\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089349 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089407 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rrv\" (UniqueName: \"kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089559 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089660 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.089740 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data-custom\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.090003 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c953fc-71cc-44c2-8e95-1219f40a82ff-logs\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.093611 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data-custom\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.093678 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-combined-ca-bundle\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.094100 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c953fc-71cc-44c2-8e95-1219f40a82ff-config-data\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.104112 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dls5c\" (UniqueName: \"kubernetes.io/projected/11c953fc-71cc-44c2-8e95-1219f40a82ff-kube-api-access-dls5c\") pod \"barbican-worker-fb9b5d689-qhv8k\" (UID: \"11c953fc-71cc-44c2-8e95-1219f40a82ff\") " pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191341 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191531 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-combined-ca-bundle\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191584 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6858f89d-6b3d-4388-a644-7a62e04bc9ae-logs\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191611 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2lb7\" (UniqueName: \"kubernetes.io/projected/245c0703-016a-4a7c-8bae-f665b29e0c64-kube-api-access-r2lb7\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191636 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-combined-ca-bundle\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191654 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data-custom\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191713 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191733 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245c0703-016a-4a7c-8bae-f665b29e0c64-logs\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191748 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data-custom\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191782 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191818 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191840 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9rrv\" (UniqueName: \"kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.191870 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 
09:00:21.191889 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z468c\" (UniqueName: \"kubernetes.io/projected/6858f89d-6b3d-4388-a644-7a62e04bc9ae-kube-api-access-z468c\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.192202 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.192296 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245c0703-016a-4a7c-8bae-f665b29e0c64-logs\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.192964 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.193616 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.193815 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.195420 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data-custom\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.196762 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-config-data\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.198920 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245c0703-016a-4a7c-8bae-f665b29e0c64-combined-ca-bundle\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.211308 
5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2lb7\" (UniqueName: \"kubernetes.io/projected/245c0703-016a-4a7c-8bae-f665b29e0c64-kube-api-access-r2lb7\") pod \"barbican-keystone-listener-5dfd989b84-vpnc9\" (UID: \"245c0703-016a-4a7c-8bae-f665b29e0c64\") " pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.211440 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9rrv\" (UniqueName: \"kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv\") pod \"dnsmasq-dns-7dcd687ff-k84ls\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.223926 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fb9b5d689-qhv8k" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.250283 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.293918 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.294112 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z468c\" (UniqueName: \"kubernetes.io/projected/6858f89d-6b3d-4388-a644-7a62e04bc9ae-kube-api-access-z468c\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.294293 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6858f89d-6b3d-4388-a644-7a62e04bc9ae-logs\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.297666 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6858f89d-6b3d-4388-a644-7a62e04bc9ae-logs\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.297851 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-combined-ca-bundle\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.298278 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data-custom\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.299489 5058 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.301421 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-combined-ca-bundle\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.301677 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data-custom\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.301728 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6858f89d-6b3d-4388-a644-7a62e04bc9ae-config-data\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.311595 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z468c\" (UniqueName: \"kubernetes.io/projected/6858f89d-6b3d-4388-a644-7a62e04bc9ae-kube-api-access-z468c\") pod \"barbican-api-786cbd49b4-wws7f\" (UID: \"6858f89d-6b3d-4388-a644-7a62e04bc9ae\") " pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.383056 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.701644 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dfd989b84-vpnc9"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.855725 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fb9b5d689-qhv8k"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.965923 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786cbd49b4-wws7f"] Oct 14 09:00:21 crc kubenswrapper[5058]: I1014 09:00:21.970320 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"] Oct 14 09:00:21 crc kubenswrapper[5058]: W1014 09:00:21.974684 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6858f89d_6b3d_4388_a644_7a62e04bc9ae.slice/crio-301cb6fe56738a9b437e3a083d00dbf74ca30aea353a9b8976ab2e3d9ecfe2bb WatchSource:0}: Error finding container 301cb6fe56738a9b437e3a083d00dbf74ca30aea353a9b8976ab2e3d9ecfe2bb: Status 404 returned error can't find the container with id 301cb6fe56738a9b437e3a083d00dbf74ca30aea353a9b8976ab2e3d9ecfe2bb Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.599149 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786cbd49b4-wws7f" event={"ID":"6858f89d-6b3d-4388-a644-7a62e04bc9ae","Type":"ContainerStarted","Data":"e6ecc4808524232e06848e698858a1157cbb812df1dfcfd54a82e99021935c8a"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.599541 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786cbd49b4-wws7f" 
event={"ID":"6858f89d-6b3d-4388-a644-7a62e04bc9ae","Type":"ContainerStarted","Data":"9d7f98414145d01ee13cda1538282e9bf04f183585902b130e039ceeec0e6434"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.599561 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.599571 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786cbd49b4-wws7f" event={"ID":"6858f89d-6b3d-4388-a644-7a62e04bc9ae","Type":"ContainerStarted","Data":"301cb6fe56738a9b437e3a083d00dbf74ca30aea353a9b8976ab2e3d9ecfe2bb"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.599583 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.602757 5058 generic.go:334] "Generic (PLEG): container finished" podID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerID="4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f" exitCode=0 Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.602837 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" event={"ID":"06b27e32-eb16-496c-ad5f-cff2d3e75653","Type":"ContainerDied","Data":"4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.602876 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" event={"ID":"06b27e32-eb16-496c-ad5f-cff2d3e75653","Type":"ContainerStarted","Data":"f45c5caf6cfd099bd92f8beb9573da788cf7b1f57c1323e4240148a02bb13ea5"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.605523 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fb9b5d689-qhv8k" event={"ID":"11c953fc-71cc-44c2-8e95-1219f40a82ff","Type":"ContainerStarted","Data":"07b75e4db2ee576c41d7cbffb392d6919a470d78c06877d34784cb61a27a3847"} Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.619707 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-786cbd49b4-wws7f" podStartSLOduration=1.619687799 podStartE2EDuration="1.619687799s" podCreationTimestamp="2025-10-14 09:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:00:22.618615538 +0000 UTC m=+7970.529699374" watchObservedRunningTime="2025-10-14 09:00:22.619687799 +0000 UTC m=+7970.530771615" Oct 14 09:00:22 crc kubenswrapper[5058]: I1014 09:00:22.628048 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" event={"ID":"245c0703-016a-4a7c-8bae-f665b29e0c64","Type":"ContainerStarted","Data":"95a818fbb6c97db3abbd0cf1abe04c7403a694e2236b35aadea31a4bef99e6e2"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.637509 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fb9b5d689-qhv8k" event={"ID":"11c953fc-71cc-44c2-8e95-1219f40a82ff","Type":"ContainerStarted","Data":"b41ca6d880eee6f7c882093f66ccde859d58a385c3b96d1bb19a53a409de2c65"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.637899 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fb9b5d689-qhv8k" 
event={"ID":"11c953fc-71cc-44c2-8e95-1219f40a82ff","Type":"ContainerStarted","Data":"6059ce56d197a156510f6208c7ea720b11454d58686d3e9f82c05fb496f4b510"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.640045 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" event={"ID":"245c0703-016a-4a7c-8bae-f665b29e0c64","Type":"ContainerStarted","Data":"55bcb96c5154e5c1c28359883d121d01b0e134ccc44f0e5b50ccd10d40b07359"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.640089 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" event={"ID":"245c0703-016a-4a7c-8bae-f665b29e0c64","Type":"ContainerStarted","Data":"93adca231e3416e67c04a027d90f8ca2085725b930df534dd6097689e7195b21"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.641981 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" event={"ID":"06b27e32-eb16-496c-ad5f-cff2d3e75653","Type":"ContainerStarted","Data":"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"} Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.659366 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-fb9b5d689-qhv8k" podStartSLOduration=2.572052176 podStartE2EDuration="3.659342133s" podCreationTimestamp="2025-10-14 09:00:20 +0000 UTC" firstStartedPulling="2025-10-14 09:00:21.866145253 +0000 UTC m=+7969.777229079" lastFinishedPulling="2025-10-14 09:00:22.95343522 +0000 UTC m=+7970.864519036" observedRunningTime="2025-10-14 09:00:23.653342292 +0000 UTC m=+7971.564426108" watchObservedRunningTime="2025-10-14 09:00:23.659342133 +0000 UTC m=+7971.570425939" Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.675416 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5dfd989b84-vpnc9" podStartSLOduration=2.435166412 podStartE2EDuration="3.675399142s" podCreationTimestamp="2025-10-14 09:00:20 +0000 UTC" firstStartedPulling="2025-10-14 09:00:21.712728627 +0000 UTC m=+7969.623812433" lastFinishedPulling="2025-10-14 09:00:22.952961357 +0000 UTC m=+7970.864045163" observedRunningTime="2025-10-14 09:00:23.674167807 +0000 UTC m=+7971.585251633" watchObservedRunningTime="2025-10-14 09:00:23.675399142 +0000 UTC m=+7971.586482948" Oct 14 09:00:23 crc kubenswrapper[5058]: I1014 09:00:23.693239 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" podStartSLOduration=3.6932195119999998 podStartE2EDuration="3.693219512s" podCreationTimestamp="2025-10-14 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:00:23.691892914 +0000 UTC m=+7971.602976750" watchObservedRunningTime="2025-10-14 09:00:23.693219512 +0000 UTC m=+7971.604303318" Oct 14 09:00:24 crc kubenswrapper[5058]: I1014 09:00:24.651485 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:30 crc kubenswrapper[5058]: I1014 09:00:30.791919 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:00:30 crc kubenswrapper[5058]: E1014 09:00:30.792942 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:00:31 crc kubenswrapper[5058]: I1014 09:00:31.301884 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" Oct 14 09:00:31 crc kubenswrapper[5058]: I1014 09:00:31.401117 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"] Oct 14 09:00:31 crc kubenswrapper[5058]: I1014 09:00:31.401775 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="dnsmasq-dns" containerID="cri-o://c242c20aa903d8b345f37a83016ab71aa82dd8b8e96611bd1a039ce7e0654c1a" gracePeriod=10 Oct 14 09:00:31 crc kubenswrapper[5058]: I1014 09:00:31.754834 5058 generic.go:334] "Generic (PLEG): container finished" podID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerID="c242c20aa903d8b345f37a83016ab71aa82dd8b8e96611bd1a039ce7e0654c1a" exitCode=0 Oct 14 09:00:31 crc kubenswrapper[5058]: I1014 09:00:31.755000 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" event={"ID":"bbee7133-91ad-4fd0-9bb8-498a86d322e1","Type":"ContainerDied","Data":"c242c20aa903d8b345f37a83016ab71aa82dd8b8e96611bd1a039ce7e0654c1a"} Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.054347 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.217071 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb\") pod \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.217694 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb\") pod \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.217734 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc\") pod \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.217760 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2psvw\" (UniqueName: \"kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw\") pod \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.218005 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config\") pod \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\" (UID: \"bbee7133-91ad-4fd0-9bb8-498a86d322e1\") " Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.225261 5058 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw" (OuterVolumeSpecName: "kube-api-access-2psvw") pod "bbee7133-91ad-4fd0-9bb8-498a86d322e1" (UID: "bbee7133-91ad-4fd0-9bb8-498a86d322e1"). InnerVolumeSpecName "kube-api-access-2psvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.270349 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbee7133-91ad-4fd0-9bb8-498a86d322e1" (UID: "bbee7133-91ad-4fd0-9bb8-498a86d322e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.272027 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbee7133-91ad-4fd0-9bb8-498a86d322e1" (UID: "bbee7133-91ad-4fd0-9bb8-498a86d322e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.273452 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbee7133-91ad-4fd0-9bb8-498a86d322e1" (UID: "bbee7133-91ad-4fd0-9bb8-498a86d322e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.289697 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config" (OuterVolumeSpecName: "config") pod "bbee7133-91ad-4fd0-9bb8-498a86d322e1" (UID: "bbee7133-91ad-4fd0-9bb8-498a86d322e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.320665 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.320722 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.320736 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.320749 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbee7133-91ad-4fd0-9bb8-498a86d322e1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.320777 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2psvw\" (UniqueName: \"kubernetes.io/projected/bbee7133-91ad-4fd0-9bb8-498a86d322e1-kube-api-access-2psvw\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.776745 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" event={"ID":"bbee7133-91ad-4fd0-9bb8-498a86d322e1","Type":"ContainerDied","Data":"44b548dfa7db4f009fde5bc87027911a082362ddb819a3a74fb426a96141edb7"} Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.776837 5058 scope.go:117] "RemoveContainer" containerID="c242c20aa903d8b345f37a83016ab71aa82dd8b8e96611bd1a039ce7e0654c1a" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.776851 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d6d6fbcf-lrxw8" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.813670 5058 scope.go:117] "RemoveContainer" containerID="650f6dfb7c8f882d31f318841aaedc09332942153903326f445b8fbbb0fd406d" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.831958 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.850371 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"] Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.869497 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67d6d6fbcf-lrxw8"] Oct 14 09:00:32 crc kubenswrapper[5058]: I1014 09:00:32.955864 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-786cbd49b4-wws7f" Oct 14 09:00:34 crc kubenswrapper[5058]: I1014 09:00:34.811233 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" path="/var/lib/kubelet/pods/bbee7133-91ad-4fd0-9bb8-498a86d322e1/volumes" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.462020 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dkzb5"] Oct 14 09:00:40 crc kubenswrapper[5058]: E1014 09:00:40.463044 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="dnsmasq-dns" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.463065 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="dnsmasq-dns" Oct 14 09:00:40 crc kubenswrapper[5058]: E1014 09:00:40.463079 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="init" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.463086 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="init" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.463309 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbee7133-91ad-4fd0-9bb8-498a86d322e1" containerName="dnsmasq-dns" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.464045 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.470149 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dkzb5"] Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.571611 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsh94\" (UniqueName: \"kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94\") pod \"neutron-db-create-dkzb5\" (UID: \"1a9fbd0f-c4b6-45db-b0d8-e0240f276834\") " pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.673159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsh94\" (UniqueName: \"kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94\") pod \"neutron-db-create-dkzb5\" (UID: \"1a9fbd0f-c4b6-45db-b0d8-e0240f276834\") " pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.694636 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsh94\" (UniqueName: \"kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94\") pod \"neutron-db-create-dkzb5\" (UID: \"1a9fbd0f-c4b6-45db-b0d8-e0240f276834\") " pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:40 crc kubenswrapper[5058]: I1014 09:00:40.790934 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:41 crc kubenswrapper[5058]: I1014 09:00:41.298598 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dkzb5"] Oct 14 09:00:41 crc kubenswrapper[5058]: I1014 09:00:41.884762 5058 generic.go:334] "Generic (PLEG): container finished" podID="1a9fbd0f-c4b6-45db-b0d8-e0240f276834" containerID="349b5abb6bd73993447989e751814842dad8345ff132bc3a44ca231f779cca36" exitCode=0 Oct 14 09:00:41 crc kubenswrapper[5058]: I1014 09:00:41.884887 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dkzb5" event={"ID":"1a9fbd0f-c4b6-45db-b0d8-e0240f276834","Type":"ContainerDied","Data":"349b5abb6bd73993447989e751814842dad8345ff132bc3a44ca231f779cca36"} Oct 14 09:00:41 crc kubenswrapper[5058]: I1014 09:00:41.885159 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dkzb5" event={"ID":"1a9fbd0f-c4b6-45db-b0d8-e0240f276834","Type":"ContainerStarted","Data":"6aa83db4b2f39d1563b6cf07913883013a235070487dcdabf85facdb0fd7ca1d"} Oct 14 09:00:42 crc kubenswrapper[5058]: I1014 09:00:42.072488 5058 scope.go:117] "RemoveContainer" containerID="e303ecc5160d1a6833956807f74976e2ebb54d653d5b5ef79d083123cb1950ba" Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.279816 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.327045 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsh94\" (UniqueName: \"kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94\") pod \"1a9fbd0f-c4b6-45db-b0d8-e0240f276834\" (UID: \"1a9fbd0f-c4b6-45db-b0d8-e0240f276834\") " Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.332949 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94" (OuterVolumeSpecName: "kube-api-access-vsh94") pod "1a9fbd0f-c4b6-45db-b0d8-e0240f276834" (UID: "1a9fbd0f-c4b6-45db-b0d8-e0240f276834"). InnerVolumeSpecName "kube-api-access-vsh94". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.430402 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsh94\" (UniqueName: \"kubernetes.io/projected/1a9fbd0f-c4b6-45db-b0d8-e0240f276834-kube-api-access-vsh94\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.910041 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dkzb5" event={"ID":"1a9fbd0f-c4b6-45db-b0d8-e0240f276834","Type":"ContainerDied","Data":"6aa83db4b2f39d1563b6cf07913883013a235070487dcdabf85facdb0fd7ca1d"} Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.910525 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa83db4b2f39d1563b6cf07913883013a235070487dcdabf85facdb0fd7ca1d" Oct 14 09:00:43 crc kubenswrapper[5058]: I1014 09:00:43.910127 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dkzb5" Oct 14 09:00:44 crc kubenswrapper[5058]: I1014 09:00:44.790715 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:00:44 crc kubenswrapper[5058]: E1014 09:00:44.791197 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.587696 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0feb-account-create-p6d7p"] Oct 14 09:00:50 crc kubenswrapper[5058]: E1014 09:00:50.588450 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9fbd0f-c4b6-45db-b0d8-e0240f276834" containerName="mariadb-database-create" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.588473 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9fbd0f-c4b6-45db-b0d8-e0240f276834" containerName="mariadb-database-create" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.588651 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9fbd0f-c4b6-45db-b0d8-e0240f276834" containerName="mariadb-database-create" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.589403 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.592245 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.598227 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0feb-account-create-p6d7p"] Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.678719 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4dmc\" (UniqueName: \"kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc\") pod \"neutron-0feb-account-create-p6d7p\" (UID: \"3682e492-8b90-4e55-b095-b6af5cb7c3e5\") " pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.780189 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4dmc\" (UniqueName: \"kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc\") pod \"neutron-0feb-account-create-p6d7p\" (UID: \"3682e492-8b90-4e55-b095-b6af5cb7c3e5\") " pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.811104 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4dmc\" (UniqueName: \"kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc\") pod \"neutron-0feb-account-create-p6d7p\" (UID: \"3682e492-8b90-4e55-b095-b6af5cb7c3e5\") " pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:50 crc kubenswrapper[5058]: I1014 09:00:50.923676 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:51 crc kubenswrapper[5058]: I1014 09:00:51.424186 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0feb-account-create-p6d7p"] Oct 14 09:00:52 crc kubenswrapper[5058]: I1014 09:00:52.001151 5058 generic.go:334] "Generic (PLEG): container finished" podID="3682e492-8b90-4e55-b095-b6af5cb7c3e5" containerID="1289ff511a077089dde8e7317484106ffd4bf2a18974fae848317a11c511d497" exitCode=0 Oct 14 09:00:52 crc kubenswrapper[5058]: I1014 09:00:52.001227 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0feb-account-create-p6d7p" event={"ID":"3682e492-8b90-4e55-b095-b6af5cb7c3e5","Type":"ContainerDied","Data":"1289ff511a077089dde8e7317484106ffd4bf2a18974fae848317a11c511d497"} Oct 14 09:00:52 crc kubenswrapper[5058]: I1014 09:00:52.002759 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0feb-account-create-p6d7p" event={"ID":"3682e492-8b90-4e55-b095-b6af5cb7c3e5","Type":"ContainerStarted","Data":"3b3147dc2fff5b8ec603fcfa5f4e6b31e5d80a70fd0bead8b720e5c55631e087"} Oct 14 09:00:53 crc kubenswrapper[5058]: I1014 09:00:53.425566 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:53 crc kubenswrapper[5058]: I1014 09:00:53.541790 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4dmc\" (UniqueName: \"kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc\") pod \"3682e492-8b90-4e55-b095-b6af5cb7c3e5\" (UID: \"3682e492-8b90-4e55-b095-b6af5cb7c3e5\") " Oct 14 09:00:53 crc kubenswrapper[5058]: I1014 09:00:53.549134 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc" (OuterVolumeSpecName: "kube-api-access-h4dmc") pod "3682e492-8b90-4e55-b095-b6af5cb7c3e5" (UID: "3682e492-8b90-4e55-b095-b6af5cb7c3e5"). InnerVolumeSpecName "kube-api-access-h4dmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:00:53 crc kubenswrapper[5058]: I1014 09:00:53.644245 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4dmc\" (UniqueName: \"kubernetes.io/projected/3682e492-8b90-4e55-b095-b6af5cb7c3e5-kube-api-access-h4dmc\") on node \"crc\" DevicePath \"\"" Oct 14 09:00:54 crc kubenswrapper[5058]: I1014 09:00:54.049622 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0feb-account-create-p6d7p" event={"ID":"3682e492-8b90-4e55-b095-b6af5cb7c3e5","Type":"ContainerDied","Data":"3b3147dc2fff5b8ec603fcfa5f4e6b31e5d80a70fd0bead8b720e5c55631e087"} Oct 14 09:00:54 crc kubenswrapper[5058]: I1014 09:00:54.049668 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3147dc2fff5b8ec603fcfa5f4e6b31e5d80a70fd0bead8b720e5c55631e087" Oct 14 09:00:54 crc kubenswrapper[5058]: I1014 09:00:54.049791 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0feb-account-create-p6d7p" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.770125 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4v8rq"] Oct 14 09:00:55 crc kubenswrapper[5058]: E1014 09:00:55.770891 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3682e492-8b90-4e55-b095-b6af5cb7c3e5" containerName="mariadb-account-create" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.770908 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3682e492-8b90-4e55-b095-b6af5cb7c3e5" containerName="mariadb-account-create" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.771203 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3682e492-8b90-4e55-b095-b6af5cb7c3e5" containerName="mariadb-account-create" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.772013 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.774381 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.774591 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.774686 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s8j5r" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.779754 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4v8rq"] Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.885310 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.885554 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrcg\" (UniqueName: \"kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.885612 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.987076 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.987168 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrcg\" (UniqueName: \"kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.987196 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.993682 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:55 crc kubenswrapper[5058]: I1014 09:00:55.993954 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:56 crc kubenswrapper[5058]: I1014 09:00:56.013483 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrcg\" (UniqueName: \"kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg\") pod \"neutron-db-sync-4v8rq\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") " pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:56 crc kubenswrapper[5058]: I1014 09:00:56.089535 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4v8rq" Oct 14 09:00:56 crc kubenswrapper[5058]: I1014 09:00:56.706252 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4v8rq"] Oct 14 09:00:56 crc kubenswrapper[5058]: W1014 09:00:56.707420 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod400309c2_becd_4fa9_ab65_fe1a18c3011e.slice/crio-cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8 WatchSource:0}: Error finding container cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8: Status 404 returned error can't find the container with id cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8 Oct 14 09:00:57 crc kubenswrapper[5058]: I1014 09:00:57.079532 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4v8rq" event={"ID":"400309c2-becd-4fa9-ab65-fe1a18c3011e","Type":"ContainerStarted","Data":"92b6a5f7af34cb7d00fc011bd64e68044c377b710cdb4989041059e9eb2ae08e"} Oct 14 09:00:57 crc kubenswrapper[5058]: I1014 09:00:57.082203 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4v8rq" event={"ID":"400309c2-becd-4fa9-ab65-fe1a18c3011e","Type":"ContainerStarted","Data":"cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8"} Oct 14 09:00:57 crc kubenswrapper[5058]: I1014 09:00:57.098925 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4v8rq" podStartSLOduration=2.09890874 podStartE2EDuration="2.09890874s" podCreationTimestamp="2025-10-14 09:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:00:57.094115753 +0000 UTC m=+8005.005199569" watchObservedRunningTime="2025-10-14 09:00:57.09890874 +0000 UTC m=+8005.009992546" Oct 14 09:00:59 crc kubenswrapper[5058]: I1014 09:00:59.790588 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:00:59 crc kubenswrapper[5058]: E1014 09:00:59.791367 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.155028 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340541-fmsll"] Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.157238 5058 util.go:30] "No 
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.165086 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340541-fmsll"]
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.177712 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.177839 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.177886 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.177961 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbxm\" (UniqueName: \"kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.278786 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.278898 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.278941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.278998 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbxm\" (UniqueName: \"kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.289082 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.289698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.295637 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.307484 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbxm\" (UniqueName: \"kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm\") pod \"keystone-cron-29340541-fmsll\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.485470 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340541-fmsll"
Oct 14 09:01:00 crc kubenswrapper[5058]: I1014 09:01:00.985882 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340541-fmsll"]
Oct 14 09:01:00 crc kubenswrapper[5058]: W1014 09:01:00.990234 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11a1a79e_ed50_405d_aada_2b4b6a280c98.slice/crio-d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f WatchSource:0}: Error finding container d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f: Status 404 returned error can't find the container with id d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f
Oct 14 09:01:01 crc kubenswrapper[5058]: I1014 09:01:01.128857 5058 generic.go:334] "Generic (PLEG): container finished" podID="400309c2-becd-4fa9-ab65-fe1a18c3011e" containerID="92b6a5f7af34cb7d00fc011bd64e68044c377b710cdb4989041059e9eb2ae08e" exitCode=0
Oct 14 09:01:01 crc kubenswrapper[5058]: I1014 09:01:01.128953 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4v8rq" event={"ID":"400309c2-becd-4fa9-ab65-fe1a18c3011e","Type":"ContainerDied","Data":"92b6a5f7af34cb7d00fc011bd64e68044c377b710cdb4989041059e9eb2ae08e"}
Oct 14 09:01:01 crc kubenswrapper[5058]: I1014 09:01:01.131405 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340541-fmsll" event={"ID":"11a1a79e-ed50-405d-aada-2b4b6a280c98","Type":"ContainerStarted","Data":"d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f"}
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.139649 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340541-fmsll" event={"ID":"11a1a79e-ed50-405d-aada-2b4b6a280c98","Type":"ContainerStarted","Data":"5552267fc0115066ca594e3b300ddb5f9f6ce4b3ba9d71ad1b179179e265e859"}
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.169569 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340541-fmsll" podStartSLOduration=2.169544657 podStartE2EDuration="2.169544657s" podCreationTimestamp="2025-10-14 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:01:02.164023359 +0000 UTC m=+8010.075107185" watchObservedRunningTime="2025-10-14 09:01:02.169544657 +0000 UTC m=+8010.080628463"
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.537109 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4v8rq"
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.728901 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle\") pod \"400309c2-becd-4fa9-ab65-fe1a18c3011e\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") "
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.729022 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvrcg\" (UniqueName: \"kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg\") pod \"400309c2-becd-4fa9-ab65-fe1a18c3011e\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") "
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.729343 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config\") pod \"400309c2-becd-4fa9-ab65-fe1a18c3011e\" (UID: \"400309c2-becd-4fa9-ab65-fe1a18c3011e\") "
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.738661 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg" (OuterVolumeSpecName: "kube-api-access-bvrcg") pod "400309c2-becd-4fa9-ab65-fe1a18c3011e" (UID: "400309c2-becd-4fa9-ab65-fe1a18c3011e"). InnerVolumeSpecName "kube-api-access-bvrcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.753444 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config" (OuterVolumeSpecName: "config") pod "400309c2-becd-4fa9-ab65-fe1a18c3011e" (UID: "400309c2-becd-4fa9-ab65-fe1a18c3011e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.770032 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "400309c2-becd-4fa9-ab65-fe1a18c3011e" (UID: "400309c2-becd-4fa9-ab65-fe1a18c3011e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.831844 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.832160 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvrcg\" (UniqueName: \"kubernetes.io/projected/400309c2-becd-4fa9-ab65-fe1a18c3011e-kube-api-access-bvrcg\") on node \"crc\" DevicePath \"\""
Oct 14 09:01:02 crc kubenswrapper[5058]: I1014 09:01:02.832182 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/400309c2-becd-4fa9-ab65-fe1a18c3011e-config\") on node \"crc\" DevicePath \"\""
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.151739 5058 generic.go:334] "Generic (PLEG): container finished" podID="11a1a79e-ed50-405d-aada-2b4b6a280c98" containerID="5552267fc0115066ca594e3b300ddb5f9f6ce4b3ba9d71ad1b179179e265e859" exitCode=0
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.151842 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340541-fmsll" event={"ID":"11a1a79e-ed50-405d-aada-2b4b6a280c98","Type":"ContainerDied","Data":"5552267fc0115066ca594e3b300ddb5f9f6ce4b3ba9d71ad1b179179e265e859"}
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.154772 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4v8rq" event={"ID":"400309c2-becd-4fa9-ab65-fe1a18c3011e","Type":"ContainerDied","Data":"cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8"}
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.154852 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef6faeb13ef84a711d01c962a937c2cba132e4da626ffe11fe037efebbec9f8"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.154956 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4v8rq"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.295672 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"]
Oct 14 09:01:03 crc kubenswrapper[5058]: E1014 09:01:03.296368 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400309c2-becd-4fa9-ab65-fe1a18c3011e" containerName="neutron-db-sync"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.296480 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="400309c2-becd-4fa9-ab65-fe1a18c3011e" containerName="neutron-db-sync"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.296761 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="400309c2-becd-4fa9-ab65-fe1a18c3011e" containerName="neutron-db-sync"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.302452 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w"
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.325410 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"]
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.395913 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7df8c6fbcc-lbcbf"]
Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.397468 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7df8c6fbcc-lbcbf"
Need to start a new one" pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.403940 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-s8j5r" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.406367 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.407057 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.410409 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7df8c6fbcc-lbcbf"] Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.447124 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.447218 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.447258 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.447308 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.447369 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pz46\" (UniqueName: \"kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549113 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549189 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: 
I1014 09:01:03.549217 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-config\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549238 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549271 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549290 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-httpd-config\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549332 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pz46\" (UniqueName: \"kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549351 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzrc\" (UniqueName: \"kubernetes.io/projected/e1e202a6-e40e-4322-b8b3-5d4594b1023e-kube-api-access-pmzrc\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.549367 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-combined-ca-bundle\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.550518 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.550691 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.551073 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.551859 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.570409 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pz46\" (UniqueName: \"kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46\") pod \"dnsmasq-dns-65cccc65ff-vdh9w\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.650782 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzrc\" (UniqueName: \"kubernetes.io/projected/e1e202a6-e40e-4322-b8b3-5d4594b1023e-kube-api-access-pmzrc\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.651126 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-combined-ca-bundle\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.651250 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-config\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.651299 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-httpd-config\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.655094 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-combined-ca-bundle\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.655217 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-httpd-config\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.655429 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1e202a6-e40e-4322-b8b3-5d4594b1023e-config\") pod 
\"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.667224 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.671960 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmzrc\" (UniqueName: \"kubernetes.io/projected/e1e202a6-e40e-4322-b8b3-5d4594b1023e-kube-api-access-pmzrc\") pod \"neutron-7df8c6fbcc-lbcbf\" (UID: \"e1e202a6-e40e-4322-b8b3-5d4594b1023e\") " pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:03 crc kubenswrapper[5058]: I1014 09:01:03.711216 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.175467 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"] Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.352929 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7df8c6fbcc-lbcbf"] Oct 14 09:01:04 crc kubenswrapper[5058]: W1014 09:01:04.354281 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e202a6_e40e_4322_b8b3_5d4594b1023e.slice/crio-8744b3ed908eacb3f3fdace4ebcc908aaebda305239c491959fe24c3e63ebabb WatchSource:0}: Error finding container 8744b3ed908eacb3f3fdace4ebcc908aaebda305239c491959fe24c3e63ebabb: Status 404 returned error can't find the container with id 8744b3ed908eacb3f3fdace4ebcc908aaebda305239c491959fe24c3e63ebabb Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.521569 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340541-fmsll" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.671758 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle\") pod \"11a1a79e-ed50-405d-aada-2b4b6a280c98\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.671839 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnbxm\" (UniqueName: \"kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm\") pod \"11a1a79e-ed50-405d-aada-2b4b6a280c98\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.671882 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys\") pod \"11a1a79e-ed50-405d-aada-2b4b6a280c98\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.671969 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data\") pod \"11a1a79e-ed50-405d-aada-2b4b6a280c98\" (UID: \"11a1a79e-ed50-405d-aada-2b4b6a280c98\") " Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.675815 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm" (OuterVolumeSpecName: "kube-api-access-hnbxm") pod "11a1a79e-ed50-405d-aada-2b4b6a280c98" (UID: "11a1a79e-ed50-405d-aada-2b4b6a280c98"). InnerVolumeSpecName "kube-api-access-hnbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.677928 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "11a1a79e-ed50-405d-aada-2b4b6a280c98" (UID: "11a1a79e-ed50-405d-aada-2b4b6a280c98"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.698254 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11a1a79e-ed50-405d-aada-2b4b6a280c98" (UID: "11a1a79e-ed50-405d-aada-2b4b6a280c98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.710746 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data" (OuterVolumeSpecName: "config-data") pod "11a1a79e-ed50-405d-aada-2b4b6a280c98" (UID: "11a1a79e-ed50-405d-aada-2b4b6a280c98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.774181 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.774215 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.774238 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnbxm\" (UniqueName: \"kubernetes.io/projected/11a1a79e-ed50-405d-aada-2b4b6a280c98-kube-api-access-hnbxm\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:04 crc kubenswrapper[5058]: I1014 09:01:04.774248 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/11a1a79e-ed50-405d-aada-2b4b6a280c98-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.184934 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340541-fmsll" event={"ID":"11a1a79e-ed50-405d-aada-2b4b6a280c98","Type":"ContainerDied","Data":"d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f"} Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.185270 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30ff0591af76c177ca3e8b7ac58a2f8e3675ede600003e91d94f7847c39542f" Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.184982 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340541-fmsll" Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.187473 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7df8c6fbcc-lbcbf" event={"ID":"e1e202a6-e40e-4322-b8b3-5d4594b1023e","Type":"ContainerStarted","Data":"15e5a385ea3ffb86bc1399c2a7ff10ad38483753d520cca0d4760bad444531c0"} Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.187510 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7df8c6fbcc-lbcbf" event={"ID":"e1e202a6-e40e-4322-b8b3-5d4594b1023e","Type":"ContainerStarted","Data":"f6c1f75de59631afeb116c3bbcf5b458a51c88998bf18ee6290e5033b5a1c3d3"} Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.187522 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7df8c6fbcc-lbcbf" event={"ID":"e1e202a6-e40e-4322-b8b3-5d4594b1023e","Type":"ContainerStarted","Data":"8744b3ed908eacb3f3fdace4ebcc908aaebda305239c491959fe24c3e63ebabb"} Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.187950 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.189865 5058 generic.go:334] "Generic (PLEG): container finished" podID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerID="3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8" exitCode=0 Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.189903 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" event={"ID":"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b","Type":"ContainerDied","Data":"3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8"} Oct 14 09:01:05 crc 
Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.189923 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" event={"ID":"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b","Type":"ContainerStarted","Data":"b203b8a5454457f5a17f36fbd0ff9a90f8f3469c1241d1d8faaaa99ca3b918e6"}
Oct 14 09:01:05 crc kubenswrapper[5058]: I1014 09:01:05.236513 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7df8c6fbcc-lbcbf" podStartSLOduration=2.236496006 podStartE2EDuration="2.236496006s" podCreationTimestamp="2025-10-14 09:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:01:05.229078604 +0000 UTC m=+8013.140162410" watchObservedRunningTime="2025-10-14 09:01:05.236496006 +0000 UTC m=+8013.147579812"
Oct 14 09:01:06 crc kubenswrapper[5058]: I1014 09:01:06.202257 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" event={"ID":"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b","Type":"ContainerStarted","Data":"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3"}
Oct 14 09:01:06 crc kubenswrapper[5058]: I1014 09:01:06.202650 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w"
Oct 14 09:01:06 crc kubenswrapper[5058]: I1014 09:01:06.230725 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" podStartSLOduration=3.230707962 podStartE2EDuration="3.230707962s" podCreationTimestamp="2025-10-14 09:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:01:06.227891761 +0000 UTC m=+8014.138975577" watchObservedRunningTime="2025-10-14 09:01:06.230707962 +0000 UTC m=+8014.141791768"
Oct 14 09:01:10 crc kubenswrapper[5058]: I1014 09:01:10.790244 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"
Oct 14 09:01:10 crc kubenswrapper[5058]: E1014 09:01:10.791228 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:01:13 crc kubenswrapper[5058]: I1014 09:01:13.669776 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w"
Oct 14 09:01:13 crc kubenswrapper[5058]: I1014 09:01:13.754990 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"]
Oct 14 09:01:13 crc kubenswrapper[5058]: I1014 09:01:13.755265 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerName="dnsmasq-dns" containerID="cri-o://13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb" gracePeriod=10
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.254420 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.294303 5058 generic.go:334] "Generic (PLEG): container finished" podID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerID="13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb" exitCode=0
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.294342 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" event={"ID":"06b27e32-eb16-496c-ad5f-cff2d3e75653","Type":"ContainerDied","Data":"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"}
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.294367 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls" event={"ID":"06b27e32-eb16-496c-ad5f-cff2d3e75653","Type":"ContainerDied","Data":"f45c5caf6cfd099bd92f8beb9573da788cf7b1f57c1323e4240148a02bb13ea5"}
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.294382 5058 scope.go:117] "RemoveContainer" containerID="13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.294483 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dcd687ff-k84ls"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.322268 5058 scope.go:117] "RemoveContainer" containerID="4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.355544 5058 scope.go:117] "RemoveContainer" containerID="13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"
Oct 14 09:01:14 crc kubenswrapper[5058]: E1014 09:01:14.356028 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb\": container with ID starting with 13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb not found: ID does not exist" containerID="13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.356058 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb"} err="failed to get container status \"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb\": rpc error: code = NotFound desc = could not find container \"13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb\": container with ID starting with 13088670ff4eb89033dc3a2456f06b382a6f153cd20fa04167df4c78434bd6cb not found: ID does not exist"
Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.356079 5058 scope.go:117] "RemoveContainer" containerID="4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f"
Oct 14 09:01:14 crc kubenswrapper[5058]: E1014 09:01:14.356554 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f\": container with ID starting with 4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f not found: ID does not exist" containerID="4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f"
containerID={"Type":"cri-o","ID":"4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f"} err="failed to get container status \"4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f\": rpc error: code = NotFound desc = could not find container \"4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f\": container with ID starting with 4fb64fa52f9aadfdc3db999c1f2cab86c72470b138044aa19b26eb25952dcd4f not found: ID does not exist" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.392437 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9rrv\" (UniqueName: \"kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv\") pod \"06b27e32-eb16-496c-ad5f-cff2d3e75653\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.392512 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc\") pod \"06b27e32-eb16-496c-ad5f-cff2d3e75653\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.392555 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config\") pod \"06b27e32-eb16-496c-ad5f-cff2d3e75653\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.392624 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb\") pod \"06b27e32-eb16-496c-ad5f-cff2d3e75653\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.392695 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb\") pod \"06b27e32-eb16-496c-ad5f-cff2d3e75653\" (UID: \"06b27e32-eb16-496c-ad5f-cff2d3e75653\") " Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.403214 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv" (OuterVolumeSpecName: "kube-api-access-h9rrv") pod "06b27e32-eb16-496c-ad5f-cff2d3e75653" (UID: "06b27e32-eb16-496c-ad5f-cff2d3e75653"). InnerVolumeSpecName "kube-api-access-h9rrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.436858 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config" (OuterVolumeSpecName: "config") pod "06b27e32-eb16-496c-ad5f-cff2d3e75653" (UID: "06b27e32-eb16-496c-ad5f-cff2d3e75653"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.446478 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06b27e32-eb16-496c-ad5f-cff2d3e75653" (UID: "06b27e32-eb16-496c-ad5f-cff2d3e75653"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.455158 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06b27e32-eb16-496c-ad5f-cff2d3e75653" (UID: "06b27e32-eb16-496c-ad5f-cff2d3e75653"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.455597 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06b27e32-eb16-496c-ad5f-cff2d3e75653" (UID: "06b27e32-eb16-496c-ad5f-cff2d3e75653"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.495238 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.495268 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9rrv\" (UniqueName: \"kubernetes.io/projected/06b27e32-eb16-496c-ad5f-cff2d3e75653-kube-api-access-h9rrv\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.495281 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.495289 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.495298 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06b27e32-eb16-496c-ad5f-cff2d3e75653-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.624424 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"] Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.636519 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dcd687ff-k84ls"] Oct 14 09:01:14 crc kubenswrapper[5058]: I1014 09:01:14.799720 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" path="/var/lib/kubelet/pods/06b27e32-eb16-496c-ad5f-cff2d3e75653/volumes" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.543854 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:17 crc kubenswrapper[5058]: E1014 09:01:17.544653 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerName="init" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.544673 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerName="init" Oct 14 09:01:17 crc kubenswrapper[5058]: E1014 09:01:17.544734 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" 
containerName="dnsmasq-dns" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.544746 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerName="dnsmasq-dns" Oct 14 09:01:17 crc kubenswrapper[5058]: E1014 09:01:17.544785 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a1a79e-ed50-405d-aada-2b4b6a280c98" containerName="keystone-cron" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.544824 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a1a79e-ed50-405d-aada-2b4b6a280c98" containerName="keystone-cron" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.545095 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a1a79e-ed50-405d-aada-2b4b6a280c98" containerName="keystone-cron" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.545147 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b27e32-eb16-496c-ad5f-cff2d3e75653" containerName="dnsmasq-dns" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.547503 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.586312 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.654648 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.655485 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.655685 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9xpp\" (UniqueName: \"kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.756960 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.757047 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.757098 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9xpp\" (UniqueName: 
\"kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.758007 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.758011 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.781873 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9xpp\" (UniqueName: \"kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp\") pod \"redhat-operators-nqdmq\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:17 crc kubenswrapper[5058]: I1014 09:01:17.880085 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:18 crc kubenswrapper[5058]: I1014 09:01:18.371313 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:18 crc kubenswrapper[5058]: W1014 09:01:18.383936 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5a7109c_7aa1_4a09_9641_22a2758fe6bb.slice/crio-f462865fee3040ef5b1231e018c50ff404d12bb7310a7eb094b0ca5717ce4b57 WatchSource:0}: Error finding container f462865fee3040ef5b1231e018c50ff404d12bb7310a7eb094b0ca5717ce4b57: Status 404 returned error can't find the container with id f462865fee3040ef5b1231e018c50ff404d12bb7310a7eb094b0ca5717ce4b57 Oct 14 09:01:19 crc kubenswrapper[5058]: I1014 09:01:19.351139 5058 generic.go:334] "Generic (PLEG): container finished" podID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerID="624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633" exitCode=0 Oct 14 09:01:19 crc kubenswrapper[5058]: I1014 09:01:19.351205 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerDied","Data":"624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633"} Oct 14 09:01:19 crc kubenswrapper[5058]: I1014 09:01:19.351243 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerStarted","Data":"f462865fee3040ef5b1231e018c50ff404d12bb7310a7eb094b0ca5717ce4b57"} Oct 14 09:01:21 crc kubenswrapper[5058]: I1014 09:01:21.790279 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:01:21 crc kubenswrapper[5058]: E1014 09:01:21.791673 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:01:22 crc kubenswrapper[5058]: I1014 09:01:22.380995 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerStarted","Data":"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed"} Oct 14 09:01:24 crc kubenswrapper[5058]: I1014 09:01:24.408671 5058 generic.go:334] "Generic (PLEG): container finished" podID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerID="43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed" exitCode=0 Oct 14 09:01:24 crc kubenswrapper[5058]: I1014 09:01:24.409025 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerDied","Data":"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed"} Oct 14 09:01:25 crc kubenswrapper[5058]: I1014 09:01:25.427640 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerStarted","Data":"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de"} Oct 14 09:01:25 crc kubenswrapper[5058]: I1014 09:01:25.471903 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqdmq" podStartSLOduration=2.980430418 podStartE2EDuration="8.471872946s" podCreationTimestamp="2025-10-14 09:01:17 +0000 UTC" firstStartedPulling="2025-10-14 09:01:19.353843972 +0000 UTC m=+8027.264927808" lastFinishedPulling="2025-10-14 09:01:24.84528649 +0000 UTC m=+8032.756370336" observedRunningTime="2025-10-14 09:01:25.459003678 +0000 UTC m=+8033.370087554" watchObservedRunningTime="2025-10-14 09:01:25.471872946 +0000 UTC m=+8033.382956792" Oct 14 09:01:27 crc kubenswrapper[5058]: I1014 09:01:27.880606 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:27 crc kubenswrapper[5058]: I1014 09:01:27.880687 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:28 crc kubenswrapper[5058]: I1014 09:01:28.961362 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nqdmq" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="registry-server" probeResult="failure" output=< Oct 14 09:01:28 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:01:28 crc kubenswrapper[5058]: > Oct 14 09:01:33 crc kubenswrapper[5058]: I1014 09:01:33.724510 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7df8c6fbcc-lbcbf" Oct 14 09:01:36 crc kubenswrapper[5058]: I1014 09:01:36.790401 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:01:36 crc kubenswrapper[5058]: E1014 09:01:36.791171 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 14 09:01:37 crc kubenswrapper[5058]: I1014 09:01:37.964618 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:38 crc kubenswrapper[5058]: I1014 09:01:38.039070 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:38 crc kubenswrapper[5058]: I1014 09:01:38.227236 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:39 crc kubenswrapper[5058]: I1014 09:01:39.600865 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nqdmq" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="registry-server" containerID="cri-o://6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de" gracePeriod=2 Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.244356 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.337053 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content\") pod \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.337117 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9xpp\" (UniqueName: \"kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp\") pod \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.337160 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities\") pod \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\" (UID: \"f5a7109c-7aa1-4a09-9641-22a2758fe6bb\") " Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.338275 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities" (OuterVolumeSpecName: "utilities") pod "f5a7109c-7aa1-4a09-9641-22a2758fe6bb" (UID: "f5a7109c-7aa1-4a09-9641-22a2758fe6bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.343494 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp" (OuterVolumeSpecName: "kube-api-access-f9xpp") pod "f5a7109c-7aa1-4a09-9641-22a2758fe6bb" (UID: "f5a7109c-7aa1-4a09-9641-22a2758fe6bb"). InnerVolumeSpecName "kube-api-access-f9xpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
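Above, redhat-operators-nqdmq fails its startup probe at 09:01:28 ("timeout: failed to connect service \":50051\" within 1s") and then flips to started/ready at 09:01:37-38 once the registry server has finished loading its catalog. The probe is essentially a bounded connection attempt against the pod's gRPC port; a minimal sketch of that failure mode as a plain TCP dial (an assumption for illustration, not the actual health-probe binary the image runs):

    package probe

    import (
        "net"
        "time"
    )

    // tcpStartupProbe reports success if addr (e.g. "10.217.0.5:50051")
    // accepts a connection within timeout; on a slow-starting registry it
    // fails exactly like the "within 1s" output above.
    func tcpStartupProbe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err // surfaces as probeResult="failure" in the log
        }
        return conn.Close()
    }

Until the startup probe passes, readiness stays empty (status=""), which matches the probe ordering seen in these entries.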
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.439400 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9xpp\" (UniqueName: \"kubernetes.io/projected/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-kube-api-access-f9xpp\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.439483 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.461677 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5a7109c-7aa1-4a09-9641-22a2758fe6bb" (UID: "f5a7109c-7aa1-4a09-9641-22a2758fe6bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.541335 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a7109c-7aa1-4a09-9641-22a2758fe6bb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.617259 5058 generic.go:334] "Generic (PLEG): container finished" podID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerID="6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de" exitCode=0 Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.617358 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerDied","Data":"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de"} Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.618520 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqdmq" event={"ID":"f5a7109c-7aa1-4a09-9641-22a2758fe6bb","Type":"ContainerDied","Data":"f462865fee3040ef5b1231e018c50ff404d12bb7310a7eb094b0ca5717ce4b57"} Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.617367 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqdmq" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.618572 5058 scope.go:117] "RemoveContainer" containerID="6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.681530 5058 scope.go:117] "RemoveContainer" containerID="43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.691714 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.700950 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nqdmq"] Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.706730 5058 scope.go:117] "RemoveContainer" containerID="624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.745244 5058 scope.go:117] "RemoveContainer" containerID="6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de" Oct 14 09:01:40 crc kubenswrapper[5058]: E1014 09:01:40.748684 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de\": container with ID starting with 6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de not found: ID does not exist" containerID="6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.748979 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de"} err="failed to get container status \"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de\": rpc error: code = NotFound desc = could not find container \"6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de\": container with ID starting with 6d9837b59f3ab34feb1c4d096b372cc04edd6f3ec77e00cf99adbd2a8ecf53de not found: ID does not exist" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.749363 5058 scope.go:117] "RemoveContainer" containerID="43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed" Oct 14 09:01:40 crc kubenswrapper[5058]: E1014 09:01:40.750103 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed\": container with ID starting with 43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed not found: ID does not exist" containerID="43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.750303 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed"} err="failed to get container status \"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed\": rpc error: code = NotFound desc = could not find container \"43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed\": container with ID starting with 43ca7892c609481b63ffb14cbe641a3a6659513fcc55155c9f334ee596e560ed not found: ID does not exist" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.750990 5058 scope.go:117] "RemoveContainer" 
containerID="624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633" Oct 14 09:01:40 crc kubenswrapper[5058]: E1014 09:01:40.751588 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633\": container with ID starting with 624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633 not found: ID does not exist" containerID="624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.751823 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633"} err="failed to get container status \"624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633\": rpc error: code = NotFound desc = could not find container \"624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633\": container with ID starting with 624cafc79d6860120a9461e724dae821706c9f4ba29c4d82ccf4aea303a67633 not found: ID does not exist" Oct 14 09:01:40 crc kubenswrapper[5058]: I1014 09:01:40.811068 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" path="/var/lib/kubelet/pods/f5a7109c-7aa1-4a09-9641-22a2758fe6bb/volumes" Oct 14 09:01:42 crc kubenswrapper[5058]: I1014 09:01:42.157500 5058 scope.go:117] "RemoveContainer" containerID="311466384c17f1fe3cbe5c22fe939191c35a4c3b551aafb073956c113ee9c1e0" Oct 14 09:01:42 crc kubenswrapper[5058]: I1014 09:01:42.191291 5058 scope.go:117] "RemoveContainer" containerID="aa0391d826a2faa27c84406ffbe849c1a00fea4a6333aa483dc2af8b806759f8" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.853637 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2nwh9"] Oct 14 09:01:44 crc kubenswrapper[5058]: E1014 09:01:44.854689 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="registry-server" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.854707 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="registry-server" Oct 14 09:01:44 crc kubenswrapper[5058]: E1014 09:01:44.854739 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="extract-utilities" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.854746 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="extract-utilities" Oct 14 09:01:44 crc kubenswrapper[5058]: E1014 09:01:44.854757 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="extract-content" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.854762 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="extract-content" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.854957 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a7109c-7aa1-4a09-9641-22a2758fe6bb" containerName="registry-server" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.855533 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2nwh9"] Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.855614 5058 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.859016 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.859205 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7mnhg" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.859377 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.859500 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.859590 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.945467 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.945537 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.945575 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.945599 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.948042 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.948110 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khddk\" (UniqueName: \"kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.948146 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.960612 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.962422 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:44 crc kubenswrapper[5058]: I1014 09:01:44.987000 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.050951 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051017 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051036 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051063 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051084 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051451 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051500 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051566 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051729 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54p7\" (UniqueName: \"kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051754 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051846 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.051888 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khddk\" (UniqueName: \"kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.052219 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.052451 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.053360 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.057514 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.057683 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.057983 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.077546 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khddk\" (UniqueName: \"kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk\") pod \"swift-ring-rebalance-2nwh9\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") " pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.154098 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.154181 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.154242 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.154296 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54p7\" (UniqueName: \"kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.154313 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.155165 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.155312 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: 
\"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.155959 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.156132 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.174733 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.174790 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54p7\" (UniqueName: \"kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7\") pod \"dnsmasq-dns-76b75cc767-7jsdd\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.289877 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.607147 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2nwh9"] Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.679183 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2nwh9" event={"ID":"d8286808-dcf6-4843-9278-43e3983998ae","Type":"ContainerStarted","Data":"5ab44e778582130943b494a9942f992a0339351aa382576edb6dc85cf47ed65a"} Oct 14 09:01:45 crc kubenswrapper[5058]: W1014 09:01:45.767279 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc91d0e94_33c8_46be_ae31_1191e59b3dd0.slice/crio-22f715c61865c3fec6820ee80d27973261eea353757e127a011b8b6f79c96d0e WatchSource:0}: Error finding container 22f715c61865c3fec6820ee80d27973261eea353757e127a011b8b6f79c96d0e: Status 404 returned error can't find the container with id 22f715c61865c3fec6820ee80d27973261eea353757e127a011b8b6f79c96d0e Oct 14 09:01:45 crc kubenswrapper[5058]: I1014 09:01:45.770879 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:01:46 crc kubenswrapper[5058]: I1014 09:01:46.689341 5058 generic.go:334] "Generic (PLEG): container finished" podID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerID="c4d0ad77d6b3a7f60820e035530099e136a763e49317b49905571c4ae04c696a" exitCode=0 Oct 14 09:01:46 crc kubenswrapper[5058]: I1014 09:01:46.689431 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" event={"ID":"c91d0e94-33c8-46be-ae31-1191e59b3dd0","Type":"ContainerDied","Data":"c4d0ad77d6b3a7f60820e035530099e136a763e49317b49905571c4ae04c696a"} Oct 14 09:01:46 crc kubenswrapper[5058]: I1014 09:01:46.689740 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" 
event={"ID":"c91d0e94-33c8-46be-ae31-1191e59b3dd0","Type":"ContainerStarted","Data":"22f715c61865c3fec6820ee80d27973261eea353757e127a011b8b6f79c96d0e"} Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.711227 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" event={"ID":"c91d0e94-33c8-46be-ae31-1191e59b3dd0","Type":"ContainerStarted","Data":"0b3b2dde6364631ea68b96ed909ba864efb6ae004e0264df5672f9ac4f602048"} Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.711429 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.731484 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" podStartSLOduration=3.731466681 podStartE2EDuration="3.731466681s" podCreationTimestamp="2025-10-14 09:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:01:47.730174684 +0000 UTC m=+8055.641258490" watchObservedRunningTime="2025-10-14 09:01:47.731466681 +0000 UTC m=+8055.642550487" Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.869495 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-86cccbdf6-gvj7b"] Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.871012 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.875537 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 14 09:01:47 crc kubenswrapper[5058]: I1014 09:01:47.878695 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86cccbdf6-gvj7b"] Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.007324 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-etc-swift\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.007403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-log-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.007867 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-run-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.007958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-config-data\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.008490 5058 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7x7\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-kube-api-access-sq7x7\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.008578 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-combined-ca-bundle\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110353 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-log-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110675 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-run-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110712 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-config-data\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110889 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7x7\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-kube-api-access-sq7x7\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110921 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-log-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110934 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-combined-ca-bundle\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.110997 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/032ce0df-73cb-4f14-89ea-400cb13781a5-run-httpd\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.111083 5058 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-etc-swift\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.116431 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-config-data\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.118960 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-etc-swift\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.125753 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7x7\" (UniqueName: \"kubernetes.io/projected/032ce0df-73cb-4f14-89ea-400cb13781a5-kube-api-access-sq7x7\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.127398 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032ce0df-73cb-4f14-89ea-400cb13781a5-combined-ca-bundle\") pod \"swift-proxy-86cccbdf6-gvj7b\" (UID: \"032ce0df-73cb-4f14-89ea-400cb13781a5\") " pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.194925 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:48 crc kubenswrapper[5058]: I1014 09:01:48.790189 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:01:48 crc kubenswrapper[5058]: E1014 09:01:48.790427 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:01:50 crc kubenswrapper[5058]: I1014 09:01:50.293903 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86cccbdf6-gvj7b"] Oct 14 09:01:50 crc kubenswrapper[5058]: W1014 09:01:50.299405 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod032ce0df_73cb_4f14_89ea_400cb13781a5.slice/crio-ac5922e6abefad960c19ab344b5cee70f814f066b11d5fb949f8b892a3657d4b WatchSource:0}: Error finding container ac5922e6abefad960c19ab344b5cee70f814f066b11d5fb949f8b892a3657d4b: Status 404 returned error can't find the container with id ac5922e6abefad960c19ab344b5cee70f814f066b11d5fb949f8b892a3657d4b Oct 14 09:01:50 crc kubenswrapper[5058]: I1014 09:01:50.745584 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2nwh9" event={"ID":"d8286808-dcf6-4843-9278-43e3983998ae","Type":"ContainerStarted","Data":"f38bc9768c2d43410b9cb628e899c70c6cad9e624ff8459564730b30126ffb2e"} Oct 14 09:01:50 crc kubenswrapper[5058]: I1014 09:01:50.747864 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86cccbdf6-gvj7b" event={"ID":"032ce0df-73cb-4f14-89ea-400cb13781a5","Type":"ContainerStarted","Data":"fc6172dd6a43e2f52c0c0b247f1791668e355c85ab5f8fd957ab945b0d55cac7"} Oct 14 09:01:50 crc kubenswrapper[5058]: I1014 09:01:50.747890 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86cccbdf6-gvj7b" event={"ID":"032ce0df-73cb-4f14-89ea-400cb13781a5","Type":"ContainerStarted","Data":"ac5922e6abefad960c19ab344b5cee70f814f066b11d5fb949f8b892a3657d4b"} Oct 14 09:01:50 crc kubenswrapper[5058]: I1014 09:01:50.766657 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2nwh9" podStartSLOduration=2.749005561 podStartE2EDuration="6.766632201s" podCreationTimestamp="2025-10-14 09:01:44 +0000 UTC" firstStartedPulling="2025-10-14 09:01:45.60282337 +0000 UTC m=+8053.513907176" lastFinishedPulling="2025-10-14 09:01:49.62045001 +0000 UTC m=+8057.531533816" observedRunningTime="2025-10-14 09:01:50.760475645 +0000 UTC m=+8058.671559501" watchObservedRunningTime="2025-10-14 09:01:50.766632201 +0000 UTC m=+8058.677716037" Oct 14 09:01:51 crc kubenswrapper[5058]: I1014 09:01:51.802316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86cccbdf6-gvj7b" event={"ID":"032ce0df-73cb-4f14-89ea-400cb13781a5","Type":"ContainerStarted","Data":"3ce09390ad40d7ec37666af2d48f2611ef4bc4f30f8748724bc8a4534d688596"} Oct 14 09:01:51 crc kubenswrapper[5058]: I1014 09:01:51.824988 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:51 crc kubenswrapper[5058]: I1014 
Oct 14 09:01:53 crc kubenswrapper[5058]: I1014 09:01:53.833887 5058 generic.go:334] "Generic (PLEG): container finished" podID="d8286808-dcf6-4843-9278-43e3983998ae" containerID="f38bc9768c2d43410b9cb628e899c70c6cad9e624ff8459564730b30126ffb2e" exitCode=0
Oct 14 09:01:53 crc kubenswrapper[5058]: I1014 09:01:53.834031 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2nwh9" event={"ID":"d8286808-dcf6-4843-9278-43e3983998ae","Type":"ContainerDied","Data":"f38bc9768c2d43410b9cb628e899c70c6cad9e624ff8459564730b30126ffb2e"}
Oct 14 09:01:53 crc kubenswrapper[5058]: I1014 09:01:53.866970 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-86cccbdf6-gvj7b" podStartSLOduration=6.866901132 podStartE2EDuration="6.866901132s" podCreationTimestamp="2025-10-14 09:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:01:51.83545164 +0000 UTC m=+8059.746535466" watchObservedRunningTime="2025-10-14 09:01:53.866901132 +0000 UTC m=+8061.777984948"
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.266324 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2nwh9"
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.292940 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd"
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.370283 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"]
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.370803 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="dnsmasq-dns" containerID="cri-o://e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3" gracePeriod=10
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.379207 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380240 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380336 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380378 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380415 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380443 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khddk\" (UniqueName: \"kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.380501 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts\") pod \"d8286808-dcf6-4843-9278-43e3983998ae\" (UID: \"d8286808-dcf6-4843-9278-43e3983998ae\") "
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.383370 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.386955 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.390738 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk" (OuterVolumeSpecName: "kube-api-access-khddk") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "kube-api-access-khddk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.414986 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.420803 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.437691 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts" (OuterVolumeSpecName: "scripts") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.450103 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d8286808-dcf6-4843-9278-43e3983998ae" (UID: "d8286808-dcf6-4843-9278-43e3983998ae"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.482664 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.482851 5058 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.482932 5058 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.483029 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8286808-dcf6-4843-9278-43e3983998ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.483104 5058 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8286808-dcf6-4843-9278-43e3983998ae-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.483188 5058 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8286808-dcf6-4843-9278-43e3983998ae-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.483265 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khddk\" (UniqueName: \"kubernetes.io/projected/d8286808-dcf6-4843-9278-43e3983998ae-kube-api-access-khddk\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.801210 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.866312 5058 generic.go:334] "Generic (PLEG): container finished" podID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerID="e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3" exitCode=0 Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.866382 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" event={"ID":"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b","Type":"ContainerDied","Data":"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3"} Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.866410 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" event={"ID":"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b","Type":"ContainerDied","Data":"b203b8a5454457f5a17f36fbd0ff9a90f8f3469c1241d1d8faaaa99ca3b918e6"} Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.866431 5058 scope.go:117] "RemoveContainer" containerID="e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.866591 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65cccc65ff-vdh9w" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.868740 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2nwh9" event={"ID":"d8286808-dcf6-4843-9278-43e3983998ae","Type":"ContainerDied","Data":"5ab44e778582130943b494a9942f992a0339351aa382576edb6dc85cf47ed65a"} Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.868784 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab44e778582130943b494a9942f992a0339351aa382576edb6dc85cf47ed65a" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.868896 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2nwh9" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.884776 5058 scope.go:117] "RemoveContainer" containerID="3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.888577 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb\") pod \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.888737 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc\") pod \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.888838 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pz46\" (UniqueName: \"kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46\") pod \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.888913 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config\") pod \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.888933 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb\") pod \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\" (UID: \"c9bbeb01-6063-4fca-a64f-94ad0d63cf8b\") " Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.909316 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46" (OuterVolumeSpecName: "kube-api-access-4pz46") pod "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" (UID: "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b"). InnerVolumeSpecName "kube-api-access-4pz46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.920077 5058 scope.go:117] "RemoveContainer" containerID="e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3" Oct 14 09:01:55 crc kubenswrapper[5058]: E1014 09:01:55.920745 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3\": container with ID starting with e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3 not found: ID does not exist" containerID="e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.920789 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3"} err="failed to get container status \"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3\": rpc error: code = NotFound desc = could not find container \"e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3\": container with ID starting with e54c2286084336b5b7e53977bfe24ff7f565ba2ff016c7a62e218791d41fe5e3 not found: ID does not exist" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.920823 5058 scope.go:117] "RemoveContainer" containerID="3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8" Oct 14 09:01:55 crc kubenswrapper[5058]: E1014 09:01:55.923322 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8\": container with ID starting with 3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8 not found: ID does not exist" containerID="3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.923365 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8"} err="failed to get container status \"3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8\": rpc error: code = NotFound desc = could not find container \"3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8\": container with ID starting with 3b5f082a65f8457cfc04e59a6e2dda53e1de7dd94ee7d4ed41cb82d0997c64e8 not found: ID does not exist" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.966116 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" (UID: "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.967250 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config" (OuterVolumeSpecName: "config") pod "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" (UID: "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.968280 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" (UID: "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.981514 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" (UID: "c9bbeb01-6063-4fca-a64f-94ad0d63cf8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.990950 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.990980 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pz46\" (UniqueName: \"kubernetes.io/projected/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-kube-api-access-4pz46\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.990992 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.991004 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:55 crc kubenswrapper[5058]: I1014 09:01:55.991013 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:01:56 crc kubenswrapper[5058]: I1014 09:01:56.238023 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"] Oct 14 09:01:56 crc kubenswrapper[5058]: I1014 09:01:56.245130 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65cccc65ff-vdh9w"] Oct 14 09:01:56 crc kubenswrapper[5058]: I1014 09:01:56.799647 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" path="/var/lib/kubelet/pods/c9bbeb01-6063-4fca-a64f-94ad0d63cf8b/volumes" Oct 14 09:01:58 crc kubenswrapper[5058]: I1014 09:01:58.197440 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:58 crc kubenswrapper[5058]: I1014 09:01:58.200759 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86cccbdf6-gvj7b" Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.134541 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzg9h"] Oct 14 09:01:59 crc kubenswrapper[5058]: E1014 09:01:59.135986 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="dnsmasq-dns" 
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.136004 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="dnsmasq-dns"
Oct 14 09:01:59 crc kubenswrapper[5058]: E1014 09:01:59.136022 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8286808-dcf6-4843-9278-43e3983998ae" containerName="swift-ring-rebalance"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.136030 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8286808-dcf6-4843-9278-43e3983998ae" containerName="swift-ring-rebalance"
Oct 14 09:01:59 crc kubenswrapper[5058]: E1014 09:01:59.136041 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="init"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.136047 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="init"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.136574 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bbeb01-6063-4fca-a64f-94ad0d63cf8b" containerName="dnsmasq-dns"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.136590 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8286808-dcf6-4843-9278-43e3983998ae" containerName="swift-ring-rebalance"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.140265 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.161292 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.161397 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.161427 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vp6p\" (UniqueName: \"kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.180458 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzg9h"]
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.263495 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.263795 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vp6p\" (UniqueName: \"kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.263897 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.264285 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.264295 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.287819 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vp6p\" (UniqueName: \"kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p\") pod \"community-operators-jzg9h\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:01:59 crc kubenswrapper[5058]: I1014 09:01:59.474205 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzg9h"
Oct 14 09:02:00 crc kubenswrapper[5058]: I1014 09:02:00.006614 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzg9h"]
Oct 14 09:02:00 crc kubenswrapper[5058]: I1014 09:02:00.918687 5058 generic.go:334] "Generic (PLEG): container finished" podID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerID="24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8" exitCode=0
Oct 14 09:02:00 crc kubenswrapper[5058]: I1014 09:02:00.918755 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerDied","Data":"24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8"}
Oct 14 09:02:00 crc kubenswrapper[5058]: I1014 09:02:00.919056 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerStarted","Data":"221eb59c535588d61d1971b661b75e410acf90a051aac486a5936f73224d62d7"}
Oct 14 09:02:00 crc kubenswrapper[5058]: I1014 09:02:00.921111 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 09:02:01 crc kubenswrapper[5058]: I1014 09:02:01.791564 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1"
Oct 14 09:02:01 crc kubenswrapper[5058]: E1014 09:02:01.792878 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:02:02 crc kubenswrapper[5058]: I1014 09:02:02.950331 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerStarted","Data":"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4"}
Oct 14 09:02:03 crc kubenswrapper[5058]: I1014 09:02:03.962035 5058 generic.go:334] "Generic (PLEG): container finished" podID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerID="17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4" exitCode=0
Oct 14 09:02:03 crc kubenswrapper[5058]: I1014 09:02:03.962132 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerDied","Data":"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4"}
Oct 14 09:02:04 crc kubenswrapper[5058]: I1014 09:02:04.979848 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerStarted","Data":"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9"}
Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.019516 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzg9h" podStartSLOduration=2.2918339420000002 podStartE2EDuration="6.019498982s" podCreationTimestamp="2025-10-14 09:01:59 +0000 UTC" firstStartedPulling="2025-10-14 09:02:00.920643659 +0000 UTC m=+8068.831727495" lastFinishedPulling="2025-10-14 09:02:04.648308689 +0000 UTC m=+8072.559392535" observedRunningTime="2025-10-14 09:02:05.012379548 +0000 UTC m=+8072.923463374" watchObservedRunningTime="2025-10-14 09:02:05.019498982 +0000 UTC m=+8072.930582778"
firstStartedPulling="2025-10-14 09:02:00.920643659 +0000 UTC m=+8068.831727495" lastFinishedPulling="2025-10-14 09:02:04.648308689 +0000 UTC m=+8072.559392535" observedRunningTime="2025-10-14 09:02:05.012379548 +0000 UTC m=+8072.923463374" watchObservedRunningTime="2025-10-14 09:02:05.019498982 +0000 UTC m=+8072.930582778" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.102622 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-54zzh"] Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.103665 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.120934 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zzh"] Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.182463 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgq4f\" (UniqueName: \"kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f\") pod \"cinder-db-create-54zzh\" (UID: \"654849e1-fb87-435b-9e65-c6f93143a639\") " pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.284532 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgq4f\" (UniqueName: \"kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f\") pod \"cinder-db-create-54zzh\" (UID: \"654849e1-fb87-435b-9e65-c6f93143a639\") " pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.305377 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgq4f\" (UniqueName: \"kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f\") pod \"cinder-db-create-54zzh\" (UID: \"654849e1-fb87-435b-9e65-c6f93143a639\") " pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.422985 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.926237 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zzh"] Oct 14 09:02:05 crc kubenswrapper[5058]: W1014 09:02:05.933213 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod654849e1_fb87_435b_9e65_c6f93143a639.slice/crio-59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58 WatchSource:0}: Error finding container 59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58: Status 404 returned error can't find the container with id 59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58 Oct 14 09:02:05 crc kubenswrapper[5058]: I1014 09:02:05.988717 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zzh" event={"ID":"654849e1-fb87-435b-9e65-c6f93143a639","Type":"ContainerStarted","Data":"59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58"} Oct 14 09:02:07 crc kubenswrapper[5058]: I1014 09:02:07.002848 5058 generic.go:334] "Generic (PLEG): container finished" podID="654849e1-fb87-435b-9e65-c6f93143a639" containerID="c058211268c94a15b18e8e233879faf2a1d91479fe0b1fbef79060470c35ab35" exitCode=0 Oct 14 09:02:07 crc kubenswrapper[5058]: I1014 09:02:07.003160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zzh" event={"ID":"654849e1-fb87-435b-9e65-c6f93143a639","Type":"ContainerDied","Data":"c058211268c94a15b18e8e233879faf2a1d91479fe0b1fbef79060470c35ab35"} Oct 14 09:02:08 crc kubenswrapper[5058]: I1014 09:02:08.402479 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:08 crc kubenswrapper[5058]: I1014 09:02:08.448515 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgq4f\" (UniqueName: \"kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f\") pod \"654849e1-fb87-435b-9e65-c6f93143a639\" (UID: \"654849e1-fb87-435b-9e65-c6f93143a639\") " Oct 14 09:02:08 crc kubenswrapper[5058]: I1014 09:02:08.461021 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f" (OuterVolumeSpecName: "kube-api-access-hgq4f") pod "654849e1-fb87-435b-9e65-c6f93143a639" (UID: "654849e1-fb87-435b-9e65-c6f93143a639"). InnerVolumeSpecName "kube-api-access-hgq4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:02:08 crc kubenswrapper[5058]: I1014 09:02:08.550753 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgq4f\" (UniqueName: \"kubernetes.io/projected/654849e1-fb87-435b-9e65-c6f93143a639-kube-api-access-hgq4f\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.025499 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zzh" event={"ID":"654849e1-fb87-435b-9e65-c6f93143a639","Type":"ContainerDied","Data":"59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58"} Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.025546 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b7ee159579d29107c9059cefb1824472aa2818d15716206b9c3b126670ca58" Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.025596 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zzh" Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.474489 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.474559 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:09 crc kubenswrapper[5058]: I1014 09:02:09.525606 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:10 crc kubenswrapper[5058]: I1014 09:02:10.106763 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:10 crc kubenswrapper[5058]: I1014 09:02:10.190518 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzg9h"] Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.054458 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzg9h" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="registry-server" containerID="cri-o://72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9" gracePeriod=2 Oct 14 09:02:12 crc kubenswrapper[5058]: E1014 09:02:12.258897 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc30bca_3508_4e69_be81_48f9bc7b4d14.slice/crio-72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc30bca_3508_4e69_be81_48f9bc7b4d14.slice/crio-conmon-72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9.scope\": RecentStats: unable to find data in memory cache]" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.557372 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.645600 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities\") pod \"adc30bca-3508-4e69-be81-48f9bc7b4d14\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.645667 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vp6p\" (UniqueName: \"kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p\") pod \"adc30bca-3508-4e69-be81-48f9bc7b4d14\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.645696 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content\") pod \"adc30bca-3508-4e69-be81-48f9bc7b4d14\" (UID: \"adc30bca-3508-4e69-be81-48f9bc7b4d14\") " Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.646511 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities" (OuterVolumeSpecName: "utilities") pod "adc30bca-3508-4e69-be81-48f9bc7b4d14" (UID: "adc30bca-3508-4e69-be81-48f9bc7b4d14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.651038 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p" (OuterVolumeSpecName: "kube-api-access-2vp6p") pod "adc30bca-3508-4e69-be81-48f9bc7b4d14" (UID: "adc30bca-3508-4e69-be81-48f9bc7b4d14"). InnerVolumeSpecName "kube-api-access-2vp6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.706498 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adc30bca-3508-4e69-be81-48f9bc7b4d14" (UID: "adc30bca-3508-4e69-be81-48f9bc7b4d14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.748429 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.748487 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vp6p\" (UniqueName: \"kubernetes.io/projected/adc30bca-3508-4e69-be81-48f9bc7b4d14-kube-api-access-2vp6p\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:12 crc kubenswrapper[5058]: I1014 09:02:12.748508 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc30bca-3508-4e69-be81-48f9bc7b4d14-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.067040 5058 generic.go:334] "Generic (PLEG): container finished" podID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerID="72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9" exitCode=0 Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.067105 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerDied","Data":"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9"} Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.067145 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzg9h" event={"ID":"adc30bca-3508-4e69-be81-48f9bc7b4d14","Type":"ContainerDied","Data":"221eb59c535588d61d1971b661b75e410acf90a051aac486a5936f73224d62d7"} Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.067173 5058 scope.go:117] "RemoveContainer" containerID="72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.067164 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzg9h" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.097885 5058 scope.go:117] "RemoveContainer" containerID="17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.103082 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzg9h"] Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.116340 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzg9h"] Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.132335 5058 scope.go:117] "RemoveContainer" containerID="24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.186244 5058 scope.go:117] "RemoveContainer" containerID="72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9" Oct 14 09:02:13 crc kubenswrapper[5058]: E1014 09:02:13.186671 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9\": container with ID starting with 72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9 not found: ID does not exist" containerID="72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.186729 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9"} err="failed to get container status \"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9\": rpc error: code = NotFound desc = could not find container \"72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9\": container with ID starting with 72e3f5bb2110a6d25efad5dc900d8c75709866abd832a6fcbe9b79b8ebc6eca9 not found: ID does not exist" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.186763 5058 scope.go:117] "RemoveContainer" containerID="17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4" Oct 14 09:02:13 crc kubenswrapper[5058]: E1014 09:02:13.187336 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4\": container with ID starting with 17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4 not found: ID does not exist" containerID="17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.187383 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4"} err="failed to get container status \"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4\": rpc error: code = NotFound desc = could not find container \"17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4\": container with ID starting with 17394b789bb5c413ac8da1975a40d0a4ce720bdc62278a292d77f978561de6c4 not found: ID does not exist" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.187416 5058 scope.go:117] "RemoveContainer" containerID="24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8" Oct 14 09:02:13 crc kubenswrapper[5058]: E1014 09:02:13.187723 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8\": container with ID starting with 24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8 not found: ID does not exist" containerID="24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8" Oct 14 09:02:13 crc kubenswrapper[5058]: I1014 09:02:13.187757 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8"} err="failed to get container status \"24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8\": rpc error: code = NotFound desc = could not find container \"24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8\": container with ID starting with 24c83a7a22a1407cddea2617811ca6a3703ded60383a77220ce0886ab9f2dca8 not found: ID does not exist" Oct 14 09:02:14 crc kubenswrapper[5058]: I1014 09:02:14.790377 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:02:14 crc kubenswrapper[5058]: E1014 09:02:14.790838 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:02:14 crc kubenswrapper[5058]: I1014 09:02:14.810084 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" path="/var/lib/kubelet/pods/adc30bca-3508-4e69-be81-48f9bc7b4d14/volumes" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.202924 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fcbb-account-create-pt9ld"] Oct 14 09:02:15 crc kubenswrapper[5058]: E1014 09:02:15.203779 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654849e1-fb87-435b-9e65-c6f93143a639" containerName="mariadb-database-create" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.203874 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="654849e1-fb87-435b-9e65-c6f93143a639" containerName="mariadb-database-create" Oct 14 09:02:15 crc kubenswrapper[5058]: E1014 09:02:15.203943 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="extract-utilities" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.203993 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="extract-utilities" Oct 14 09:02:15 crc kubenswrapper[5058]: E1014 09:02:15.204051 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="registry-server" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.204108 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="registry-server" Oct 14 09:02:15 crc kubenswrapper[5058]: E1014 09:02:15.204168 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="extract-content" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.204226 5058 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="extract-content" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.204460 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="654849e1-fb87-435b-9e65-c6f93143a639" containerName="mariadb-database-create" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.204523 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc30bca-3508-4e69-be81-48f9bc7b4d14" containerName="registry-server" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.205206 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.208154 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.212985 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fcbb-account-create-pt9ld"] Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.338953 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bgh\" (UniqueName: \"kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh\") pod \"cinder-fcbb-account-create-pt9ld\" (UID: \"936ed4d4-9a04-4d09-b93d-e37419454c38\") " pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.440497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bgh\" (UniqueName: \"kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh\") pod \"cinder-fcbb-account-create-pt9ld\" (UID: \"936ed4d4-9a04-4d09-b93d-e37419454c38\") " pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.463829 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bgh\" (UniqueName: \"kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh\") pod \"cinder-fcbb-account-create-pt9ld\" (UID: \"936ed4d4-9a04-4d09-b93d-e37419454c38\") " pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.529660 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:15 crc kubenswrapper[5058]: I1014 09:02:15.865422 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fcbb-account-create-pt9ld"] Oct 14 09:02:16 crc kubenswrapper[5058]: I1014 09:02:16.097757 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fcbb-account-create-pt9ld" event={"ID":"936ed4d4-9a04-4d09-b93d-e37419454c38","Type":"ContainerStarted","Data":"5d39a317798c6675328a3c7a8471f1c21cafaa388ffc98d402efe4a346c1e2b9"} Oct 14 09:02:16 crc kubenswrapper[5058]: I1014 09:02:16.098117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fcbb-account-create-pt9ld" event={"ID":"936ed4d4-9a04-4d09-b93d-e37419454c38","Type":"ContainerStarted","Data":"331f9f0d19174234fba665cab75f8530f54d682caa5fc05d3035202feb666248"} Oct 14 09:02:16 crc kubenswrapper[5058]: I1014 09:02:16.122928 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fcbb-account-create-pt9ld" podStartSLOduration=1.122900555 podStartE2EDuration="1.122900555s" podCreationTimestamp="2025-10-14 09:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:02:16.11609403 +0000 UTC m=+8084.027177886" watchObservedRunningTime="2025-10-14 09:02:16.122900555 +0000 UTC m=+8084.033984401" Oct 14 09:02:17 crc kubenswrapper[5058]: I1014 09:02:17.106977 5058 generic.go:334] "Generic (PLEG): container finished" podID="936ed4d4-9a04-4d09-b93d-e37419454c38" containerID="5d39a317798c6675328a3c7a8471f1c21cafaa388ffc98d402efe4a346c1e2b9" exitCode=0 Oct 14 09:02:17 crc kubenswrapper[5058]: I1014 09:02:17.107020 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fcbb-account-create-pt9ld" event={"ID":"936ed4d4-9a04-4d09-b93d-e37419454c38","Type":"ContainerDied","Data":"5d39a317798c6675328a3c7a8471f1c21cafaa388ffc98d402efe4a346c1e2b9"} Oct 14 09:02:18 crc kubenswrapper[5058]: I1014 09:02:18.603460 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:18 crc kubenswrapper[5058]: I1014 09:02:18.705815 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5bgh\" (UniqueName: \"kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh\") pod \"936ed4d4-9a04-4d09-b93d-e37419454c38\" (UID: \"936ed4d4-9a04-4d09-b93d-e37419454c38\") " Oct 14 09:02:18 crc kubenswrapper[5058]: I1014 09:02:18.712765 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh" (OuterVolumeSpecName: "kube-api-access-g5bgh") pod "936ed4d4-9a04-4d09-b93d-e37419454c38" (UID: "936ed4d4-9a04-4d09-b93d-e37419454c38"). InnerVolumeSpecName "kube-api-access-g5bgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:02:18 crc kubenswrapper[5058]: I1014 09:02:18.811369 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5bgh\" (UniqueName: \"kubernetes.io/projected/936ed4d4-9a04-4d09-b93d-e37419454c38-kube-api-access-g5bgh\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:19 crc kubenswrapper[5058]: I1014 09:02:19.138994 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fcbb-account-create-pt9ld" event={"ID":"936ed4d4-9a04-4d09-b93d-e37419454c38","Type":"ContainerDied","Data":"331f9f0d19174234fba665cab75f8530f54d682caa5fc05d3035202feb666248"} Oct 14 09:02:19 crc kubenswrapper[5058]: I1014 09:02:19.139391 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="331f9f0d19174234fba665cab75f8530f54d682caa5fc05d3035202feb666248" Oct 14 09:02:19 crc kubenswrapper[5058]: I1014 09:02:19.139496 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fcbb-account-create-pt9ld" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.490190 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5h4hc"] Oct 14 09:02:20 crc kubenswrapper[5058]: E1014 09:02:20.490630 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936ed4d4-9a04-4d09-b93d-e37419454c38" containerName="mariadb-account-create" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.490644 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="936ed4d4-9a04-4d09-b93d-e37419454c38" containerName="mariadb-account-create" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.490955 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="936ed4d4-9a04-4d09-b93d-e37419454c38" containerName="mariadb-account-create" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.491638 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.494566 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tv2l9" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.495058 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.495326 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.504543 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5h4hc"] Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550348 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9qf\" (UniqueName: \"kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550416 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550472 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550502 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550538 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.550649 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652234 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652272 5058 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8w9qf\" (UniqueName: \"kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652303 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652340 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652363 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652392 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.652407 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.657521 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.658211 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.660951 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.667263 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " 
pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.675912 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w9qf\" (UniqueName: \"kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf\") pod \"cinder-db-sync-5h4hc\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:20 crc kubenswrapper[5058]: I1014 09:02:20.844073 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:21 crc kubenswrapper[5058]: I1014 09:02:21.367863 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5h4hc"] Oct 14 09:02:22 crc kubenswrapper[5058]: I1014 09:02:22.170983 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5h4hc" event={"ID":"84942974-990b-4917-805c-681c6c6038bf","Type":"ContainerStarted","Data":"8de50531165b31c7cd219c2972daceabc119fc11ba9fe33b074d0a8842efb4fa"} Oct 14 09:02:25 crc kubenswrapper[5058]: I1014 09:02:25.790637 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:02:25 crc kubenswrapper[5058]: E1014 09:02:25.791757 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:02:39 crc kubenswrapper[5058]: I1014 09:02:39.790561 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:02:40 crc kubenswrapper[5058]: I1014 09:02:40.344224 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5h4hc" event={"ID":"84942974-990b-4917-805c-681c6c6038bf","Type":"ContainerStarted","Data":"7df15d0b93cff7d5f6f8bbd495e0aed17ad71fdd48d38837792442db1bdd9850"} Oct 14 09:02:40 crc kubenswrapper[5058]: I1014 09:02:40.353662 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337"} Oct 14 09:02:40 crc kubenswrapper[5058]: I1014 09:02:40.361026 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5h4hc" podStartSLOduration=2.358330924 podStartE2EDuration="20.361003967s" podCreationTimestamp="2025-10-14 09:02:20 +0000 UTC" firstStartedPulling="2025-10-14 09:02:21.384053968 +0000 UTC m=+8089.295137804" lastFinishedPulling="2025-10-14 09:02:39.386727031 +0000 UTC m=+8107.297810847" observedRunningTime="2025-10-14 09:02:40.360883094 +0000 UTC m=+8108.271966930" watchObservedRunningTime="2025-10-14 09:02:40.361003967 +0000 UTC m=+8108.272087803" Oct 14 09:02:42 crc kubenswrapper[5058]: I1014 09:02:42.409759 5058 scope.go:117] "RemoveContainer" containerID="b25a4b0eaddbadefd46098ceb76c34825a703ebec450a39f4a7cbeed766bfdc7" Oct 14 09:02:42 crc kubenswrapper[5058]: I1014 09:02:42.443638 5058 scope.go:117] "RemoveContainer" containerID="6aeba1b43e5a5d2e7d8b0384a4d51ffda4f906c29251e649976518c999e290eb" Oct 14 09:02:42 crc 
kubenswrapper[5058]: I1014 09:02:42.483509 5058 scope.go:117] "RemoveContainer" containerID="77c5bd35027c641ba133d4895b893d8b41462efdc8b9bf9ae2595815f0d365a0" Oct 14 09:02:43 crc kubenswrapper[5058]: I1014 09:02:43.432946 5058 generic.go:334] "Generic (PLEG): container finished" podID="84942974-990b-4917-805c-681c6c6038bf" containerID="7df15d0b93cff7d5f6f8bbd495e0aed17ad71fdd48d38837792442db1bdd9850" exitCode=0 Oct 14 09:02:43 crc kubenswrapper[5058]: I1014 09:02:43.433067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5h4hc" event={"ID":"84942974-990b-4917-805c-681c6c6038bf","Type":"ContainerDied","Data":"7df15d0b93cff7d5f6f8bbd495e0aed17ad71fdd48d38837792442db1bdd9850"} Oct 14 09:02:44 crc kubenswrapper[5058]: I1014 09:02:44.875883 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.065590 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.065915 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.065949 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.065979 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.066043 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w9qf\" (UniqueName: \"kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.066126 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts\") pod \"84942974-990b-4917-805c-681c6c6038bf\" (UID: \"84942974-990b-4917-805c-681c6c6038bf\") " Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.066391 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.066935 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84942974-990b-4917-805c-681c6c6038bf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.071081 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts" (OuterVolumeSpecName: "scripts") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.071585 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.073909 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf" (OuterVolumeSpecName: "kube-api-access-8w9qf") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "kube-api-access-8w9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.092516 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.131523 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data" (OuterVolumeSpecName: "config-data") pod "84942974-990b-4917-805c-681c6c6038bf" (UID: "84942974-990b-4917-805c-681c6c6038bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.168480 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.168515 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.168527 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.168539 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w9qf\" (UniqueName: \"kubernetes.io/projected/84942974-990b-4917-805c-681c6c6038bf-kube-api-access-8w9qf\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.168551 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84942974-990b-4917-805c-681c6c6038bf-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.457025 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5h4hc" event={"ID":"84942974-990b-4917-805c-681c6c6038bf","Type":"ContainerDied","Data":"8de50531165b31c7cd219c2972daceabc119fc11ba9fe33b074d0a8842efb4fa"} Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.457353 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de50531165b31c7cd219c2972daceabc119fc11ba9fe33b074d0a8842efb4fa" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.457162 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5h4hc" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.925467 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"] Oct 14 09:02:45 crc kubenswrapper[5058]: E1014 09:02:45.926058 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84942974-990b-4917-805c-681c6c6038bf" containerName="cinder-db-sync" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.926069 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="84942974-990b-4917-805c-681c6c6038bf" containerName="cinder-db-sync" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.926250 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="84942974-990b-4917-805c-681c6c6038bf" containerName="cinder-db-sync" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.927130 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.940836 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"] Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.998165 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vxv\" (UniqueName: \"kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.998246 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.998278 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.998309 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:45 crc kubenswrapper[5058]: I1014 09:02:45.998347 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.082024 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.083817 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.086872 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.087070 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.087344 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.087487 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tv2l9" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.100026 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.100080 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.100127 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.100166 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vxv\" (UniqueName: \"kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.100226 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.101050 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.101574 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.102361 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.102399 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.103691 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.125749 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vxv\" (UniqueName: \"kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv\") pod \"dnsmasq-dns-7976679757-vfnsv\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") " pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202125 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202170 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202190 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202215 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202249 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202279 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88b9\" (UniqueName: \"kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.202471 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.251643 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309390 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88b9\" (UniqueName: \"kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309452 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309528 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309551 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309570 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309589 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.309621 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.314115 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.314565 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 
09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.316632 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.319267 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.328261 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.332724 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.345660 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88b9\" (UniqueName: \"kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9\") pod \"cinder-api-0\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.412807 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.705819 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"] Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.723743 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.725824 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.754650 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.826115 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlwdb\" (UniqueName: \"kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.826201 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.826363 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.876234 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:02:46 crc kubenswrapper[5058]: W1014 09:02:46.878576 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f414c34_b78b_4222_8177_7c60383c2089.slice/crio-0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5 WatchSource:0}: Error finding container 0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5: Status 404 returned error can't find the container with id 0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5 Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.928966 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.929041 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlwdb\" (UniqueName: \"kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.929088 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.929420 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.929446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:46 crc kubenswrapper[5058]: I1014 09:02:46.951510 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlwdb\" (UniqueName: \"kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb\") pod \"redhat-marketplace-rjcfw\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.050997 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.483619 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerStarted","Data":"0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5"} Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.485550 5058 generic.go:334] "Generic (PLEG): container finished" podID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerID="38f170261852408abb68bbfe9e248849b9f2a88cbbd6deeb218a2543421b6928" exitCode=0 Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.485597 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976679757-vfnsv" event={"ID":"9512bdc2-4a78-401e-a9a6-f0c7da207183","Type":"ContainerDied","Data":"38f170261852408abb68bbfe9e248849b9f2a88cbbd6deeb218a2543421b6928"} Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.485622 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976679757-vfnsv" event={"ID":"9512bdc2-4a78-401e-a9a6-f0c7da207183","Type":"ContainerStarted","Data":"7789c9b81b88c38f6615228739b95c350b13cf1c063bab9c578354123554bcd9"} Oct 14 09:02:47 crc kubenswrapper[5058]: I1014 09:02:47.528219 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.498007 5058 generic.go:334] "Generic (PLEG): container finished" podID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerID="b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93" exitCode=0 Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.498064 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerDied","Data":"b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93"} Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.498667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerStarted","Data":"8a04c2fe3adbf479aaa53326eb7be053ace66f9edaf5192b4269679a9ad1d646"} Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.503287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerStarted","Data":"4272392ea59d88f88351e0aceec5fefde5c4189a9b2cab6183b5f780dd421623"} Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.503361 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerStarted","Data":"0fd60207141a65747939fb50726ea8db3d6173b9442934a2a1eb7bef73a2d0fb"} Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.503447 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.505638 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976679757-vfnsv" event={"ID":"9512bdc2-4a78-401e-a9a6-f0c7da207183","Type":"ContainerStarted","Data":"c071a029db34610e11b3ad742f349ed825f1b071a4c4b0c7ffa2c4422c9e8d85"} Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.505875 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.551908 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7976679757-vfnsv" podStartSLOduration=3.551888767 podStartE2EDuration="3.551888767s" podCreationTimestamp="2025-10-14 09:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:02:48.550473357 +0000 UTC m=+8116.461557243" watchObservedRunningTime="2025-10-14 09:02:48.551888767 +0000 UTC m=+8116.462972573" Oct 14 09:02:48 crc kubenswrapper[5058]: I1014 09:02:48.582977 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.582946275 podStartE2EDuration="2.582946275s" podCreationTimestamp="2025-10-14 09:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:02:48.57787081 +0000 UTC m=+8116.488954676" watchObservedRunningTime="2025-10-14 09:02:48.582946275 +0000 UTC m=+8116.494030121" Oct 14 09:02:50 crc kubenswrapper[5058]: I1014 09:02:50.531322 5058 generic.go:334] "Generic (PLEG): container finished" podID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerID="d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f" exitCode=0 Oct 14 09:02:50 crc kubenswrapper[5058]: I1014 09:02:50.531405 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerDied","Data":"d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f"} Oct 14 09:02:51 crc kubenswrapper[5058]: I1014 09:02:51.556828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerStarted","Data":"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0"} Oct 14 09:02:51 crc kubenswrapper[5058]: I1014 09:02:51.592399 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjcfw" podStartSLOduration=2.8606698760000002 podStartE2EDuration="5.592373279s" podCreationTimestamp="2025-10-14 09:02:46 +0000 UTC" firstStartedPulling="2025-10-14 09:02:48.500431426 +0000 UTC m=+8116.411515232" 
lastFinishedPulling="2025-10-14 09:02:51.232134799 +0000 UTC m=+8119.143218635" observedRunningTime="2025-10-14 09:02:51.580051487 +0000 UTC m=+8119.491135333" watchObservedRunningTime="2025-10-14 09:02:51.592373279 +0000 UTC m=+8119.503457115" Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.254026 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7976679757-vfnsv" Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.357047 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.357660 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="dnsmasq-dns" containerID="cri-o://0b3b2dde6364631ea68b96ed909ba864efb6ae004e0264df5672f9ac4f602048" gracePeriod=10 Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.634900 5058 generic.go:334] "Generic (PLEG): container finished" podID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerID="0b3b2dde6364631ea68b96ed909ba864efb6ae004e0264df5672f9ac4f602048" exitCode=0 Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.634942 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" event={"ID":"c91d0e94-33c8-46be-ae31-1191e59b3dd0","Type":"ContainerDied","Data":"0b3b2dde6364631ea68b96ed909ba864efb6ae004e0264df5672f9ac4f602048"} Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.907301 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.949260 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config\") pod \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.949315 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb\") pod \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.949353 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc\") pod \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.949375 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54p7\" (UniqueName: \"kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7\") pod \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.949461 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb\") pod \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\" (UID: \"c91d0e94-33c8-46be-ae31-1191e59b3dd0\") " Oct 14 09:02:56 crc kubenswrapper[5058]: I1014 09:02:56.955588 5058 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7" (OuterVolumeSpecName: "kube-api-access-x54p7") pod "c91d0e94-33c8-46be-ae31-1191e59b3dd0" (UID: "c91d0e94-33c8-46be-ae31-1191e59b3dd0"). InnerVolumeSpecName "kube-api-access-x54p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.038829 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c91d0e94-33c8-46be-ae31-1191e59b3dd0" (UID: "c91d0e94-33c8-46be-ae31-1191e59b3dd0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.044785 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c91d0e94-33c8-46be-ae31-1191e59b3dd0" (UID: "c91d0e94-33c8-46be-ae31-1191e59b3dd0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.051946 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.052088 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.054495 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.054527 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.054538 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54p7\" (UniqueName: \"kubernetes.io/projected/c91d0e94-33c8-46be-ae31-1191e59b3dd0-kube-api-access-x54p7\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.057714 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config" (OuterVolumeSpecName: "config") pod "c91d0e94-33c8-46be-ae31-1191e59b3dd0" (UID: "c91d0e94-33c8-46be-ae31-1191e59b3dd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.058204 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c91d0e94-33c8-46be-ae31-1191e59b3dd0" (UID: "c91d0e94-33c8-46be-ae31-1191e59b3dd0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.100335 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.156021 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.156054 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c91d0e94-33c8-46be-ae31-1191e59b3dd0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.659023 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.659144 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b75cc767-7jsdd" event={"ID":"c91d0e94-33c8-46be-ae31-1191e59b3dd0","Type":"ContainerDied","Data":"22f715c61865c3fec6820ee80d27973261eea353757e127a011b8b6f79c96d0e"} Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.659415 5058 scope.go:117] "RemoveContainer" containerID="0b3b2dde6364631ea68b96ed909ba864efb6ae004e0264df5672f9ac4f602048" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.705910 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.710264 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76b75cc767-7jsdd"] Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.711900 5058 scope.go:117] "RemoveContainer" containerID="c4d0ad77d6b3a7f60820e035530099e136a763e49317b49905571c4ae04c696a" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.731747 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:02:57 crc kubenswrapper[5058]: I1014 09:02:57.796218 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:02:58 crc kubenswrapper[5058]: I1014 09:02:58.171149 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 14 09:02:58 crc kubenswrapper[5058]: I1014 09:02:58.804049 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" path="/var/lib/kubelet/pods/c91d0e94-33c8-46be-ae31-1191e59b3dd0/volumes" Oct 14 09:02:59 crc kubenswrapper[5058]: I1014 09:02:59.678278 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjcfw" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="registry-server" containerID="cri-o://f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0" gracePeriod=2 Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.239508 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.418664 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities\") pod \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.419099 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlwdb\" (UniqueName: \"kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb\") pod \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.419185 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content\") pod \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\" (UID: \"1e30bb4e-3434-4362-8c95-faf16b1d60bb\") " Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.419466 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities" (OuterVolumeSpecName: "utilities") pod "1e30bb4e-3434-4362-8c95-faf16b1d60bb" (UID: "1e30bb4e-3434-4362-8c95-faf16b1d60bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.420111 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.442126 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb" (OuterVolumeSpecName: "kube-api-access-nlwdb") pod "1e30bb4e-3434-4362-8c95-faf16b1d60bb" (UID: "1e30bb4e-3434-4362-8c95-faf16b1d60bb"). InnerVolumeSpecName "kube-api-access-nlwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.444210 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e30bb4e-3434-4362-8c95-faf16b1d60bb" (UID: "1e30bb4e-3434-4362-8c95-faf16b1d60bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.521628 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlwdb\" (UniqueName: \"kubernetes.io/projected/1e30bb4e-3434-4362-8c95-faf16b1d60bb-kube-api-access-nlwdb\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.521663 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e30bb4e-3434-4362-8c95-faf16b1d60bb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.691339 5058 generic.go:334] "Generic (PLEG): container finished" podID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerID="f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0" exitCode=0 Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.691545 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerDied","Data":"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0"} Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.691705 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjcfw" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.691938 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjcfw" event={"ID":"1e30bb4e-3434-4362-8c95-faf16b1d60bb","Type":"ContainerDied","Data":"8a04c2fe3adbf479aaa53326eb7be053ace66f9edaf5192b4269679a9ad1d646"} Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.691992 5058 scope.go:117] "RemoveContainer" containerID="f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.714197 5058 scope.go:117] "RemoveContainer" containerID="d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.749906 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.751387 5058 scope.go:117] "RemoveContainer" containerID="b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.759583 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjcfw"] Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.772919 5058 scope.go:117] "RemoveContainer" containerID="f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0" Oct 14 09:03:00 crc kubenswrapper[5058]: E1014 09:03:00.773404 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0\": container with ID starting with f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0 not found: ID does not exist" containerID="f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.773479 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0"} err="failed to get container status 
\"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0\": rpc error: code = NotFound desc = could not find container \"f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0\": container with ID starting with f54a3a551cb124bf7dbe8696e9227f691685469b84b227ebc5cd18012eb2c8f0 not found: ID does not exist" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.773537 5058 scope.go:117] "RemoveContainer" containerID="d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f" Oct 14 09:03:00 crc kubenswrapper[5058]: E1014 09:03:00.773920 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f\": container with ID starting with d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f not found: ID does not exist" containerID="d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.773976 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f"} err="failed to get container status \"d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f\": rpc error: code = NotFound desc = could not find container \"d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f\": container with ID starting with d7cba6e0315bd5dfa9ad3077560ea715abf9a6d36d1aa29285a2708a75b52e3f not found: ID does not exist" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.774008 5058 scope.go:117] "RemoveContainer" containerID="b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93" Oct 14 09:03:00 crc kubenswrapper[5058]: E1014 09:03:00.774603 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93\": container with ID starting with b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93 not found: ID does not exist" containerID="b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.774641 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93"} err="failed to get container status \"b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93\": rpc error: code = NotFound desc = could not find container \"b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93\": container with ID starting with b5c0e35a41d85d6dcd27d67d62a284d4047a6da83a5766147f0698614e1a6a93 not found: ID does not exist" Oct 14 09:03:00 crc kubenswrapper[5058]: I1014 09:03:00.802586 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" path="/var/lib/kubelet/pods/1e30bb4e-3434-4362-8c95-faf16b1d60bb/volumes" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.244613 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:10 crc kubenswrapper[5058]: E1014 09:03:10.246008 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="dnsmasq-dns" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246034 5058 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="dnsmasq-dns" Oct 14 09:03:10 crc kubenswrapper[5058]: E1014 09:03:10.246067 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="extract-utilities" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246079 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="extract-utilities" Oct 14 09:03:10 crc kubenswrapper[5058]: E1014 09:03:10.246108 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="init" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246122 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="init" Oct 14 09:03:10 crc kubenswrapper[5058]: E1014 09:03:10.246153 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="registry-server" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246165 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="registry-server" Oct 14 09:03:10 crc kubenswrapper[5058]: E1014 09:03:10.246189 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="extract-content" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246202 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="extract-content" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246520 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e30bb4e-3434-4362-8c95-faf16b1d60bb" containerName="registry-server" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.246554 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91d0e94-33c8-46be-ae31-1191e59b3dd0" containerName="dnsmasq-dns" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.248897 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.258143 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.343958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.344427 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.344870 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtn4\" (UniqueName: \"kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.447107 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.447324 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbtn4\" (UniqueName: \"kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.447472 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.448505 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.449252 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.471323 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lbtn4\" (UniqueName: \"kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4\") pod \"certified-operators-6bqjh\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:10 crc kubenswrapper[5058]: I1014 09:03:10.579653 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:11 crc kubenswrapper[5058]: I1014 09:03:11.127508 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:11 crc kubenswrapper[5058]: I1014 09:03:11.836435 5058 generic.go:334] "Generic (PLEG): container finished" podID="90449325-1b0e-43d5-8c95-157db8ef542b" containerID="37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd" exitCode=0 Oct 14 09:03:11 crc kubenswrapper[5058]: I1014 09:03:11.836866 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerDied","Data":"37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd"} Oct 14 09:03:11 crc kubenswrapper[5058]: I1014 09:03:11.836902 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerStarted","Data":"8979a838618af73620f6f4080d28184f568c4fc5346db1510107a571edd0cdf0"} Oct 14 09:03:13 crc kubenswrapper[5058]: I1014 09:03:13.865125 5058 generic.go:334] "Generic (PLEG): container finished" podID="90449325-1b0e-43d5-8c95-157db8ef542b" containerID="b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6" exitCode=0 Oct 14 09:03:13 crc kubenswrapper[5058]: I1014 09:03:13.865209 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerDied","Data":"b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6"} Oct 14 09:03:14 crc kubenswrapper[5058]: I1014 09:03:14.886322 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerStarted","Data":"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c"} Oct 14 09:03:14 crc kubenswrapper[5058]: I1014 09:03:14.910001 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6bqjh" podStartSLOduration=2.395676916 podStartE2EDuration="4.909979924s" podCreationTimestamp="2025-10-14 09:03:10 +0000 UTC" firstStartedPulling="2025-10-14 09:03:11.839939367 +0000 UTC m=+8139.751023173" lastFinishedPulling="2025-10-14 09:03:14.354242365 +0000 UTC m=+8142.265326181" observedRunningTime="2025-10-14 09:03:14.906218646 +0000 UTC m=+8142.817302492" watchObservedRunningTime="2025-10-14 09:03:14.909979924 +0000 UTC m=+8142.821063760" Oct 14 09:03:15 crc kubenswrapper[5058]: I1014 09:03:15.966827 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:15 crc kubenswrapper[5058]: I1014 09:03:15.968982 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:15 crc kubenswrapper[5058]: I1014 09:03:15.975989 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 09:03:15 crc kubenswrapper[5058]: I1014 09:03:15.981199 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.062936 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.063005 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.063132 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.063225 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.063257 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlptc\" (UniqueName: \"kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.063343 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.164875 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165145 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165257 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mlptc\" (UniqueName: \"kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165380 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165518 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165626 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.165487 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.178543 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.194297 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.194983 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.197965 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlptc\" (UniqueName: \"kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.198303 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " pod="openstack/cinder-scheduler-0" Oct 14 
09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.290590 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.760495 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:16 crc kubenswrapper[5058]: I1014 09:03:16.967992 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerStarted","Data":"7435312fe655f512d39cc561cb0416a4259be8e13f8df26b5027555ac6958d27"} Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.451486 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.452051 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api-log" containerID="cri-o://0fd60207141a65747939fb50726ea8db3d6173b9442934a2a1eb7bef73a2d0fb" gracePeriod=30 Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.452181 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api" containerID="cri-o://4272392ea59d88f88351e0aceec5fefde5c4189a9b2cab6183b5f780dd421623" gracePeriod=30 Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.981514 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerStarted","Data":"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff"} Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.985089 5058 generic.go:334] "Generic (PLEG): container finished" podID="6f414c34-b78b-4222-8177-7c60383c2089" containerID="0fd60207141a65747939fb50726ea8db3d6173b9442934a2a1eb7bef73a2d0fb" exitCode=143 Oct 14 09:03:17 crc kubenswrapper[5058]: I1014 09:03:17.985132 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerDied","Data":"0fd60207141a65747939fb50726ea8db3d6173b9442934a2a1eb7bef73a2d0fb"} Oct 14 09:03:18 crc kubenswrapper[5058]: I1014 09:03:18.999531 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerStarted","Data":"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20"} Oct 14 09:03:19 crc kubenswrapper[5058]: I1014 09:03:19.036880 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.65294262 podStartE2EDuration="4.036853857s" podCreationTimestamp="2025-10-14 09:03:15 +0000 UTC" firstStartedPulling="2025-10-14 09:03:16.766823884 +0000 UTC m=+8144.677907740" lastFinishedPulling="2025-10-14 09:03:17.150735171 +0000 UTC m=+8145.061818977" observedRunningTime="2025-10-14 09:03:19.03029239 +0000 UTC m=+8146.941376266" watchObservedRunningTime="2025-10-14 09:03:19.036853857 +0000 UTC m=+8146.947937703" Oct 14 09:03:20 crc kubenswrapper[5058]: I1014 09:03:20.580436 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:20 crc kubenswrapper[5058]: I1014 09:03:20.581278 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:20 crc kubenswrapper[5058]: I1014 09:03:20.647287 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.022478 5058 generic.go:334] "Generic (PLEG): container finished" podID="6f414c34-b78b-4222-8177-7c60383c2089" containerID="4272392ea59d88f88351e0aceec5fefde5c4189a9b2cab6183b5f780dd421623" exitCode=0 Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.022577 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerDied","Data":"4272392ea59d88f88351e0aceec5fefde5c4189a9b2cab6183b5f780dd421623"} Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.022854 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f414c34-b78b-4222-8177-7c60383c2089","Type":"ContainerDied","Data":"0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5"} Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.022872 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8d0dede41e63b9b3ec6abf04e6c08aad7ef88ef3c259c8de1cfd8369e859c5" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.066263 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.108412 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.133346 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180074 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180159 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180223 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88b9\" (UniqueName: \"kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180316 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180457 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data\") pod 
\"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180553 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180599 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts\") pod \"6f414c34-b78b-4222-8177-7c60383c2089\" (UID: \"6f414c34-b78b-4222-8177-7c60383c2089\") " Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.180449 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.181011 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs" (OuterVolumeSpecName: "logs") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.181347 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f414c34-b78b-4222-8177-7c60383c2089-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.181373 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f414c34-b78b-4222-8177-7c60383c2089-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.189158 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9" (OuterVolumeSpecName: "kube-api-access-c88b9") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "kube-api-access-c88b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.189511 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts" (OuterVolumeSpecName: "scripts") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.193427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.221368 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.246858 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data" (OuterVolumeSpecName: "config-data") pod "6f414c34-b78b-4222-8177-7c60383c2089" (UID: "6f414c34-b78b-4222-8177-7c60383c2089"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.283002 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.283040 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.283055 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.283066 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f414c34-b78b-4222-8177-7c60383c2089-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.283080 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88b9\" (UniqueName: \"kubernetes.io/projected/6f414c34-b78b-4222-8177-7c60383c2089-kube-api-access-c88b9\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:21 crc kubenswrapper[5058]: I1014 09:03:21.290778 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.035146 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.101876 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.114358 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.128036 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:22 crc kubenswrapper[5058]: E1014 09:03:22.128468 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.128483 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api" Oct 14 09:03:22 crc kubenswrapper[5058]: E1014 09:03:22.128522 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api-log" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.128528 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api-log" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.128701 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.128722 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f414c34-b78b-4222-8177-7c60383c2089" containerName="cinder-api-log" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.129771 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.134394 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.137872 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201156 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201289 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef61a77-84c0-426d-b452-11801332fcf4-logs\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201358 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ef61a77-84c0-426d-b452-11801332fcf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201506 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zp8q\" (UniqueName: \"kubernetes.io/projected/8ef61a77-84c0-426d-b452-11801332fcf4-kube-api-access-7zp8q\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201629 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201661 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.201694 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-scripts\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.304628 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ef61a77-84c0-426d-b452-11801332fcf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.304737 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zp8q\" (UniqueName: 
\"kubernetes.io/projected/8ef61a77-84c0-426d-b452-11801332fcf4-kube-api-access-7zp8q\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.304848 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ef61a77-84c0-426d-b452-11801332fcf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.304923 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.304980 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.305045 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-scripts\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.305327 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.305414 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef61a77-84c0-426d-b452-11801332fcf4-logs\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.306542 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef61a77-84c0-426d-b452-11801332fcf4-logs\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.312118 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.313470 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-config-data\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.313742 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.321003 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef61a77-84c0-426d-b452-11801332fcf4-scripts\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.341012 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zp8q\" (UniqueName: \"kubernetes.io/projected/8ef61a77-84c0-426d-b452-11801332fcf4-kube-api-access-7zp8q\") pod \"cinder-api-0\" (UID: \"8ef61a77-84c0-426d-b452-11801332fcf4\") " pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.456031 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 09:03:22 crc kubenswrapper[5058]: I1014 09:03:22.800069 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f414c34-b78b-4222-8177-7c60383c2089" path="/var/lib/kubelet/pods/6f414c34-b78b-4222-8177-7c60383c2089/volumes" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.048639 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.050192 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6bqjh" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="registry-server" containerID="cri-o://1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c" gracePeriod=2 Oct 14 09:03:23 crc kubenswrapper[5058]: W1014 09:03:23.057963 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ef61a77_84c0_426d_b452_11801332fcf4.slice/crio-3a1cb5970875534d64c75e5fb3345fd8594044b4393cfe580f5c883e346f8bf6 WatchSource:0}: Error finding container 3a1cb5970875534d64c75e5fb3345fd8594044b4393cfe580f5c883e346f8bf6: Status 404 returned error can't find the container with id 3a1cb5970875534d64c75e5fb3345fd8594044b4393cfe580f5c883e346f8bf6 Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.537462 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.631106 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content\") pod \"90449325-1b0e-43d5-8c95-157db8ef542b\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.631148 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities\") pod \"90449325-1b0e-43d5-8c95-157db8ef542b\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.631280 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbtn4\" (UniqueName: \"kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4\") pod \"90449325-1b0e-43d5-8c95-157db8ef542b\" (UID: \"90449325-1b0e-43d5-8c95-157db8ef542b\") " Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.632148 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities" (OuterVolumeSpecName: "utilities") pod "90449325-1b0e-43d5-8c95-157db8ef542b" (UID: "90449325-1b0e-43d5-8c95-157db8ef542b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.637322 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4" (OuterVolumeSpecName: "kube-api-access-lbtn4") pod "90449325-1b0e-43d5-8c95-157db8ef542b" (UID: "90449325-1b0e-43d5-8c95-157db8ef542b"). InnerVolumeSpecName "kube-api-access-lbtn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.733964 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbtn4\" (UniqueName: \"kubernetes.io/projected/90449325-1b0e-43d5-8c95-157db8ef542b-kube-api-access-lbtn4\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.734679 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.808373 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90449325-1b0e-43d5-8c95-157db8ef542b" (UID: "90449325-1b0e-43d5-8c95-157db8ef542b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:03:23 crc kubenswrapper[5058]: I1014 09:03:23.836356 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90449325-1b0e-43d5-8c95-157db8ef542b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.063301 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ef61a77-84c0-426d-b452-11801332fcf4","Type":"ContainerStarted","Data":"911f206c48f6bf99f1323a796457c4dbf8090bac0d9b1c476eec7fda26c6d65a"} Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.063365 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ef61a77-84c0-426d-b452-11801332fcf4","Type":"ContainerStarted","Data":"3a1cb5970875534d64c75e5fb3345fd8594044b4393cfe580f5c883e346f8bf6"} Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.069433 5058 generic.go:334] "Generic (PLEG): container finished" podID="90449325-1b0e-43d5-8c95-157db8ef542b" containerID="1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c" exitCode=0 Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.069489 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerDied","Data":"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c"} Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.069540 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6bqjh" event={"ID":"90449325-1b0e-43d5-8c95-157db8ef542b","Type":"ContainerDied","Data":"8979a838618af73620f6f4080d28184f568c4fc5346db1510107a571edd0cdf0"} Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.069597 5058 scope.go:117] "RemoveContainer" containerID="1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.069745 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6bqjh" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.124112 5058 scope.go:117] "RemoveContainer" containerID="b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.131694 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.161079 5058 scope.go:117] "RemoveContainer" containerID="37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.164589 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6bqjh"] Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.201768 5058 scope.go:117] "RemoveContainer" containerID="1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c" Oct 14 09:03:24 crc kubenswrapper[5058]: E1014 09:03:24.202175 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c\": container with ID starting with 1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c not found: ID does not exist" containerID="1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.202205 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c"} err="failed to get container status \"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c\": rpc error: code = NotFound desc = could not find container \"1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c\": container with ID starting with 1dbef30579d1b2116b652b0592db99df1978b35f7c382ef9509059944561bf4c not found: ID does not exist" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.202223 5058 scope.go:117] "RemoveContainer" containerID="b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6" Oct 14 09:03:24 crc kubenswrapper[5058]: E1014 09:03:24.202493 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6\": container with ID starting with b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6 not found: ID does not exist" containerID="b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.202536 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6"} err="failed to get container status \"b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6\": rpc error: code = NotFound desc = could not find container \"b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6\": container with ID starting with b8f4c58e1a956bdf5e8b42d5e1d5cbc3c673c551f94245f66126866dc6a984d6 not found: ID does not exist" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.202562 5058 scope.go:117] "RemoveContainer" containerID="37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd" Oct 14 09:03:24 crc kubenswrapper[5058]: E1014 09:03:24.202938 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd\": container with ID starting with 37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd not found: ID does not exist" containerID="37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.202960 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd"} err="failed to get container status \"37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd\": rpc error: code = NotFound desc = could not find container \"37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd\": container with ID starting with 37688300d4fcec4dd7ee1fb3789c9ebcd37a00f9aab3b2c6cdb0096730ec9fcd not found: ID does not exist" Oct 14 09:03:24 crc kubenswrapper[5058]: E1014 09:03:24.223024 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90449325_1b0e_43d5_8c95_157db8ef542b.slice/crio-8979a838618af73620f6f4080d28184f568c4fc5346db1510107a571edd0cdf0\": RecentStats: unable to find data in memory cache]" Oct 14 09:03:24 crc kubenswrapper[5058]: I1014 09:03:24.818029 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" path="/var/lib/kubelet/pods/90449325-1b0e-43d5-8c95-157db8ef542b/volumes" Oct 14 09:03:25 crc kubenswrapper[5058]: I1014 09:03:25.090905 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ef61a77-84c0-426d-b452-11801332fcf4","Type":"ContainerStarted","Data":"8f78002d62721ca2615c1eb6e2f711bd1d04e754412d9710cea6568a9a7c0868"} Oct 14 09:03:25 crc kubenswrapper[5058]: I1014 09:03:25.093062 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 09:03:25 crc kubenswrapper[5058]: I1014 09:03:25.125393 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.125366288 podStartE2EDuration="3.125366288s" podCreationTimestamp="2025-10-14 09:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:03:25.121341653 +0000 UTC m=+8153.032425489" watchObservedRunningTime="2025-10-14 09:03:25.125366288 +0000 UTC m=+8153.036450134" Oct 14 09:03:26 crc kubenswrapper[5058]: I1014 09:03:26.585494 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 09:03:26 crc kubenswrapper[5058]: I1014 09:03:26.632962 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:27 crc kubenswrapper[5058]: I1014 09:03:27.107942 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="probe" containerID="cri-o://adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20" gracePeriod=30 Oct 14 09:03:27 crc kubenswrapper[5058]: I1014 09:03:27.107908 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="cinder-scheduler" 
containerID="cri-o://75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff" gracePeriod=30 Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.117951 5058 generic.go:334] "Generic (PLEG): container finished" podID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerID="adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20" exitCode=0 Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.117998 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerDied","Data":"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20"} Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.853071 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958599 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958767 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958813 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958869 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958953 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.958999 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlptc\" (UniqueName: \"kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc\") pod \"c7459057-7fd7-49ea-825e-e6b7f599febe\" (UID: \"c7459057-7fd7-49ea-825e-e6b7f599febe\") " Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.961931 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.965017 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.965168 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc" (OuterVolumeSpecName: "kube-api-access-mlptc") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "kube-api-access-mlptc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:28 crc kubenswrapper[5058]: I1014 09:03:28.976147 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts" (OuterVolumeSpecName: "scripts") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.008308 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.061069 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.061099 5058 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.061111 5058 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7459057-7fd7-49ea-825e-e6b7f599febe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.061122 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlptc\" (UniqueName: \"kubernetes.io/projected/c7459057-7fd7-49ea-825e-e6b7f599febe-kube-api-access-mlptc\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.061132 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.066278 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data" (OuterVolumeSpecName: "config-data") pod "c7459057-7fd7-49ea-825e-e6b7f599febe" (UID: "c7459057-7fd7-49ea-825e-e6b7f599febe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.129541 5058 generic.go:334] "Generic (PLEG): container finished" podID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerID="75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff" exitCode=0 Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.129735 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerDied","Data":"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff"} Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.130445 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7459057-7fd7-49ea-825e-e6b7f599febe","Type":"ContainerDied","Data":"7435312fe655f512d39cc561cb0416a4259be8e13f8df26b5027555ac6958d27"} Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.130540 5058 scope.go:117] "RemoveContainer" containerID="adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.129865 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.153776 5058 scope.go:117] "RemoveContainer" containerID="75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.164283 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7459057-7fd7-49ea-825e-e6b7f599febe-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.165693 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.173106 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.186216 5058 scope.go:117] "RemoveContainer" containerID="adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.186699 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20\": container with ID starting with adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20 not found: ID does not exist" containerID="adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.186729 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20"} err="failed to get container status \"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20\": rpc error: code = NotFound desc = could not find container \"adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20\": container with ID starting with adb386d4833f6618f0eb951903642ae0f44002bc82b2c889d60d42a76e1c3f20 not found: ID does not exist" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.186749 5058 scope.go:117] "RemoveContainer" containerID="75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.187250 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff\": container with ID starting with 75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff not found: ID does not exist" containerID="75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.187271 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff"} err="failed to get container status \"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff\": rpc error: code = NotFound desc = could not find container \"75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff\": container with ID starting with 75ef6e049736e9dcf725fae89a3a67dbb173d33a7bd82d18a42290e7f8954fff not found: ID does not exist" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.193520 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.194189 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="cinder-scheduler" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194246 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="cinder-scheduler" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.194270 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="probe" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194282 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="probe" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.194340 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="extract-utilities" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194353 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="extract-utilities" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.194384 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="extract-content" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194408 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="extract-content" Oct 14 09:03:29 crc kubenswrapper[5058]: E1014 09:03:29.194421 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="registry-server" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194432 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="registry-server" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194782 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="probe" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194847 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="90449325-1b0e-43d5-8c95-157db8ef542b" containerName="registry-server" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.194882 5058 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" containerName="cinder-scheduler" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.196439 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.201761 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.207848 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.266070 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.266637 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrz4\" (UniqueName: \"kubernetes.io/projected/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-kube-api-access-gdrz4\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.266760 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.266886 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.267050 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.267241 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.368645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.368915 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrz4\" (UniqueName: 
\"kubernetes.io/projected/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-kube-api-access-gdrz4\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.369019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.369571 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.369726 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.369888 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.369619 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.373663 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.375512 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.378485 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.382534 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.385385 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gdrz4\" (UniqueName: \"kubernetes.io/projected/5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250-kube-api-access-gdrz4\") pod \"cinder-scheduler-0\" (UID: \"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250\") " pod="openstack/cinder-scheduler-0" Oct 14 09:03:29 crc kubenswrapper[5058]: I1014 09:03:29.521361 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 09:03:30 crc kubenswrapper[5058]: I1014 09:03:30.115339 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 09:03:30 crc kubenswrapper[5058]: I1014 09:03:30.145450 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250","Type":"ContainerStarted","Data":"850ff01b078c054ac9c3604f0a61a2d907cd766f2ed9c6d0dc82c3cf9fab7588"} Oct 14 09:03:30 crc kubenswrapper[5058]: I1014 09:03:30.814911 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7459057-7fd7-49ea-825e-e6b7f599febe" path="/var/lib/kubelet/pods/c7459057-7fd7-49ea-825e-e6b7f599febe/volumes" Oct 14 09:03:31 crc kubenswrapper[5058]: I1014 09:03:31.157856 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250","Type":"ContainerStarted","Data":"9bec335b4e9a9dfcc4eb495ecec179bd8132cd28684eaebdcd7c0d2e19213788"} Oct 14 09:03:32 crc kubenswrapper[5058]: I1014 09:03:32.171304 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250","Type":"ContainerStarted","Data":"763c05e1465836c4892f479cd96d42f53176591df4d1b8f23df1615bc3ec4c85"} Oct 14 09:03:32 crc kubenswrapper[5058]: I1014 09:03:32.210773 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.2107501689999998 podStartE2EDuration="3.210750169s" podCreationTimestamp="2025-10-14 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:03:32.200702372 +0000 UTC m=+8160.111786238" watchObservedRunningTime="2025-10-14 09:03:32.210750169 +0000 UTC m=+8160.121834005" Oct 14 09:03:34 crc kubenswrapper[5058]: I1014 09:03:34.308260 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 14 09:03:34 crc kubenswrapper[5058]: I1014 09:03:34.521952 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 09:03:39 crc kubenswrapper[5058]: I1014 09:03:39.867872 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 09:03:42 crc kubenswrapper[5058]: I1014 09:03:42.574906 5058 scope.go:117] "RemoveContainer" containerID="e01da2354a070e765096ff57b72a5372492233b3ea584e17d6c611b549e8458c" Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.669684 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-48svt"] Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.670985 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-48svt" Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.681429 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-48svt"] Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.761916 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k24tn\" (UniqueName: \"kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn\") pod \"glance-db-create-48svt\" (UID: \"ba118ad4-0e58-4e21-957a-70e154050f57\") " pod="openstack/glance-db-create-48svt" Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.863563 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k24tn\" (UniqueName: \"kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn\") pod \"glance-db-create-48svt\" (UID: \"ba118ad4-0e58-4e21-957a-70e154050f57\") " pod="openstack/glance-db-create-48svt" Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.883156 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k24tn\" (UniqueName: \"kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn\") pod \"glance-db-create-48svt\" (UID: \"ba118ad4-0e58-4e21-957a-70e154050f57\") " pod="openstack/glance-db-create-48svt" Oct 14 09:03:43 crc kubenswrapper[5058]: I1014 09:03:43.994614 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-48svt" Oct 14 09:03:44 crc kubenswrapper[5058]: I1014 09:03:44.465203 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-48svt"] Oct 14 09:03:45 crc kubenswrapper[5058]: I1014 09:03:45.331468 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba118ad4-0e58-4e21-957a-70e154050f57" containerID="b1fa8409ea3951fb1af915d0fceae9dba5d0f01bcf040665961dc95997252719" exitCode=0 Oct 14 09:03:45 crc kubenswrapper[5058]: I1014 09:03:45.332002 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-48svt" event={"ID":"ba118ad4-0e58-4e21-957a-70e154050f57","Type":"ContainerDied","Data":"b1fa8409ea3951fb1af915d0fceae9dba5d0f01bcf040665961dc95997252719"} Oct 14 09:03:45 crc kubenswrapper[5058]: I1014 09:03:45.332043 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-48svt" event={"ID":"ba118ad4-0e58-4e21-957a-70e154050f57","Type":"ContainerStarted","Data":"106d9d4d0d873e964e64f5f5cbc6e4be14d465c4ffa2e4c5a8a4eb96a0297fff"} Oct 14 09:03:46 crc kubenswrapper[5058]: I1014 09:03:46.814822 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-48svt" Oct 14 09:03:46 crc kubenswrapper[5058]: I1014 09:03:46.925706 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k24tn\" (UniqueName: \"kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn\") pod \"ba118ad4-0e58-4e21-957a-70e154050f57\" (UID: \"ba118ad4-0e58-4e21-957a-70e154050f57\") " Oct 14 09:03:46 crc kubenswrapper[5058]: I1014 09:03:46.946228 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn" (OuterVolumeSpecName: "kube-api-access-k24tn") pod "ba118ad4-0e58-4e21-957a-70e154050f57" (UID: "ba118ad4-0e58-4e21-957a-70e154050f57"). 
InnerVolumeSpecName "kube-api-access-k24tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:47 crc kubenswrapper[5058]: I1014 09:03:47.028533 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k24tn\" (UniqueName: \"kubernetes.io/projected/ba118ad4-0e58-4e21-957a-70e154050f57-kube-api-access-k24tn\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:47 crc kubenswrapper[5058]: I1014 09:03:47.351732 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-48svt" event={"ID":"ba118ad4-0e58-4e21-957a-70e154050f57","Type":"ContainerDied","Data":"106d9d4d0d873e964e64f5f5cbc6e4be14d465c4ffa2e4c5a8a4eb96a0297fff"} Oct 14 09:03:47 crc kubenswrapper[5058]: I1014 09:03:47.351776 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106d9d4d0d873e964e64f5f5cbc6e4be14d465c4ffa2e4c5a8a4eb96a0297fff" Oct 14 09:03:47 crc kubenswrapper[5058]: I1014 09:03:47.351779 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-48svt" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.754388 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-784e-account-create-wvxwt"] Oct 14 09:03:53 crc kubenswrapper[5058]: E1014 09:03:53.756040 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba118ad4-0e58-4e21-957a-70e154050f57" containerName="mariadb-database-create" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.756101 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba118ad4-0e58-4e21-957a-70e154050f57" containerName="mariadb-database-create" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.756550 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba118ad4-0e58-4e21-957a-70e154050f57" containerName="mariadb-database-create" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.757993 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.763387 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.764313 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-784e-account-create-wvxwt"] Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.874535 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk52j\" (UniqueName: \"kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j\") pod \"glance-784e-account-create-wvxwt\" (UID: \"5dfff069-4a26-4862-931b-a63aafa9d8ea\") " pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:53 crc kubenswrapper[5058]: I1014 09:03:53.977262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk52j\" (UniqueName: \"kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j\") pod \"glance-784e-account-create-wvxwt\" (UID: \"5dfff069-4a26-4862-931b-a63aafa9d8ea\") " pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:54 crc kubenswrapper[5058]: I1014 09:03:54.010439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk52j\" (UniqueName: \"kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j\") pod \"glance-784e-account-create-wvxwt\" (UID: \"5dfff069-4a26-4862-931b-a63aafa9d8ea\") " pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:54 crc kubenswrapper[5058]: I1014 09:03:54.095778 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:54 crc kubenswrapper[5058]: I1014 09:03:54.360614 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-784e-account-create-wvxwt"] Oct 14 09:03:54 crc kubenswrapper[5058]: I1014 09:03:54.434828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-784e-account-create-wvxwt" event={"ID":"5dfff069-4a26-4862-931b-a63aafa9d8ea","Type":"ContainerStarted","Data":"2cd9b069f7d2e045c81d71f4de0a58b4c1200287eb09afe51fe9940273f0a3c6"} Oct 14 09:03:55 crc kubenswrapper[5058]: I1014 09:03:55.449531 5058 generic.go:334] "Generic (PLEG): container finished" podID="5dfff069-4a26-4862-931b-a63aafa9d8ea" containerID="97e6ec4bc6f5195919b73b02ddcc547103828b7668659707a2d28217f69c93a6" exitCode=0 Oct 14 09:03:55 crc kubenswrapper[5058]: I1014 09:03:55.449585 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-784e-account-create-wvxwt" event={"ID":"5dfff069-4a26-4862-931b-a63aafa9d8ea","Type":"ContainerDied","Data":"97e6ec4bc6f5195919b73b02ddcc547103828b7668659707a2d28217f69c93a6"} Oct 14 09:03:56 crc kubenswrapper[5058]: I1014 09:03:56.819171 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:56 crc kubenswrapper[5058]: I1014 09:03:56.827457 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk52j\" (UniqueName: \"kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j\") pod \"5dfff069-4a26-4862-931b-a63aafa9d8ea\" (UID: \"5dfff069-4a26-4862-931b-a63aafa9d8ea\") " Oct 14 09:03:56 crc kubenswrapper[5058]: I1014 09:03:56.835916 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j" (OuterVolumeSpecName: "kube-api-access-zk52j") pod "5dfff069-4a26-4862-931b-a63aafa9d8ea" (UID: "5dfff069-4a26-4862-931b-a63aafa9d8ea"). InnerVolumeSpecName "kube-api-access-zk52j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:03:56 crc kubenswrapper[5058]: I1014 09:03:56.930928 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk52j\" (UniqueName: \"kubernetes.io/projected/5dfff069-4a26-4862-931b-a63aafa9d8ea-kube-api-access-zk52j\") on node \"crc\" DevicePath \"\"" Oct 14 09:03:57 crc kubenswrapper[5058]: I1014 09:03:57.482066 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-784e-account-create-wvxwt" event={"ID":"5dfff069-4a26-4862-931b-a63aafa9d8ea","Type":"ContainerDied","Data":"2cd9b069f7d2e045c81d71f4de0a58b4c1200287eb09afe51fe9940273f0a3c6"} Oct 14 09:03:57 crc kubenswrapper[5058]: I1014 09:03:57.482480 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd9b069f7d2e045c81d71f4de0a58b4c1200287eb09afe51fe9940273f0a3c6" Oct 14 09:03:57 crc kubenswrapper[5058]: I1014 09:03:57.482186 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-784e-account-create-wvxwt" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.889065 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7tvpt"] Oct 14 09:03:58 crc kubenswrapper[5058]: E1014 09:03:58.900831 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfff069-4a26-4862-931b-a63aafa9d8ea" containerName="mariadb-account-create" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.900949 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfff069-4a26-4862-931b-a63aafa9d8ea" containerName="mariadb-account-create" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.901281 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfff069-4a26-4862-931b-a63aafa9d8ea" containerName="mariadb-account-create" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.902197 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.903348 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7tvpt"] Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.905146 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.906705 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gf2t8" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.973725 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.973773 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.973838 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqmk\" (UniqueName: \"kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:58 crc kubenswrapper[5058]: I1014 09:03:58.974383 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.076272 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.076379 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.076411 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.076467 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwqmk\" (UniqueName: \"kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk\") pod 
\"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.087542 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.091046 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.093466 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.101850 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwqmk\" (UniqueName: \"kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk\") pod \"glance-db-sync-7tvpt\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.229094 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7tvpt" Oct 14 09:03:59 crc kubenswrapper[5058]: I1014 09:03:59.839749 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7tvpt"] Oct 14 09:04:00 crc kubenswrapper[5058]: I1014 09:04:00.520050 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7tvpt" event={"ID":"af3692ec-bee1-4096-bb7d-b50388aab98b","Type":"ContainerStarted","Data":"fd2833843f7d10cf8247d24a6325c7de3229cd74d1d1a801f481fc50bbfc2c94"} Oct 14 09:04:16 crc kubenswrapper[5058]: I1014 09:04:16.665692 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7tvpt" event={"ID":"af3692ec-bee1-4096-bb7d-b50388aab98b","Type":"ContainerStarted","Data":"c0202b1f68c9439aff6fa49f14ca51ebd5fc81198aa4377d05679bf3ce55725c"} Oct 14 09:04:16 crc kubenswrapper[5058]: I1014 09:04:16.692540 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7tvpt" podStartSLOduration=2.936020047 podStartE2EDuration="18.692520919s" podCreationTimestamp="2025-10-14 09:03:58 +0000 UTC" firstStartedPulling="2025-10-14 09:03:59.859378104 +0000 UTC m=+8187.770461910" lastFinishedPulling="2025-10-14 09:04:15.615878976 +0000 UTC m=+8203.526962782" observedRunningTime="2025-10-14 09:04:16.688138204 +0000 UTC m=+8204.599222070" watchObservedRunningTime="2025-10-14 09:04:16.692520919 +0000 UTC m=+8204.603604725" Oct 14 09:04:19 crc kubenswrapper[5058]: I1014 09:04:19.699571 5058 generic.go:334] "Generic (PLEG): container finished" podID="af3692ec-bee1-4096-bb7d-b50388aab98b" containerID="c0202b1f68c9439aff6fa49f14ca51ebd5fc81198aa4377d05679bf3ce55725c" exitCode=0 Oct 14 09:04:19 crc kubenswrapper[5058]: I1014 09:04:19.699716 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-7tvpt" event={"ID":"af3692ec-bee1-4096-bb7d-b50388aab98b","Type":"ContainerDied","Data":"c0202b1f68c9439aff6fa49f14ca51ebd5fc81198aa4377d05679bf3ce55725c"} Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.157414 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7tvpt" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.256490 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data\") pod \"af3692ec-bee1-4096-bb7d-b50388aab98b\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.257037 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwqmk\" (UniqueName: \"kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk\") pod \"af3692ec-bee1-4096-bb7d-b50388aab98b\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.257123 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data\") pod \"af3692ec-bee1-4096-bb7d-b50388aab98b\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.257238 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle\") pod \"af3692ec-bee1-4096-bb7d-b50388aab98b\" (UID: \"af3692ec-bee1-4096-bb7d-b50388aab98b\") " Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.265372 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk" (OuterVolumeSpecName: "kube-api-access-qwqmk") pod "af3692ec-bee1-4096-bb7d-b50388aab98b" (UID: "af3692ec-bee1-4096-bb7d-b50388aab98b"). InnerVolumeSpecName "kube-api-access-qwqmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.267099 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "af3692ec-bee1-4096-bb7d-b50388aab98b" (UID: "af3692ec-bee1-4096-bb7d-b50388aab98b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.305482 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af3692ec-bee1-4096-bb7d-b50388aab98b" (UID: "af3692ec-bee1-4096-bb7d-b50388aab98b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.334519 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data" (OuterVolumeSpecName: "config-data") pod "af3692ec-bee1-4096-bb7d-b50388aab98b" (UID: "af3692ec-bee1-4096-bb7d-b50388aab98b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.368820 5058 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.368867 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwqmk\" (UniqueName: \"kubernetes.io/projected/af3692ec-bee1-4096-bb7d-b50388aab98b-kube-api-access-qwqmk\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.368889 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.368905 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3692ec-bee1-4096-bb7d-b50388aab98b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.724599 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7tvpt" event={"ID":"af3692ec-bee1-4096-bb7d-b50388aab98b","Type":"ContainerDied","Data":"fd2833843f7d10cf8247d24a6325c7de3229cd74d1d1a801f481fc50bbfc2c94"} Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.724669 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7tvpt" Oct 14 09:04:21 crc kubenswrapper[5058]: I1014 09:04:21.724724 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2833843f7d10cf8247d24a6325c7de3229cd74d1d1a801f481fc50bbfc2c94" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.134584 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:22 crc kubenswrapper[5058]: E1014 09:04:22.135078 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3692ec-bee1-4096-bb7d-b50388aab98b" containerName="glance-db-sync" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.135104 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3692ec-bee1-4096-bb7d-b50388aab98b" containerName="glance-db-sync" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.135292 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3692ec-bee1-4096-bb7d-b50388aab98b" containerName="glance-db-sync" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.136349 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.138355 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gf2t8" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.139852 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.140456 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.156707 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.220060 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"] Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.221914 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.242108 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"] Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.282998 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.284342 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.290489 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291017 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291060 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291113 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbfq\" (UniqueName: \"kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291133 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291271 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.291300 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.297587 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393233 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jxx7\" (UniqueName: \"kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393284 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393325 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393355 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393390 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393417 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbfq\" (UniqueName: \"kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393434 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393470 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393506 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393527 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393553 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393566 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393627 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393648 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkxc\" (UniqueName: 
\"kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393665 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.393881 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.394818 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.397341 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.397476 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.398199 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.408417 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbfq\" (UniqueName: \"kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq\") pod \"glance-default-external-api-0\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.455663 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.495339 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.495535 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.495646 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.495752 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pkxc\" (UniqueName: \"kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.495912 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496090 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jxx7\" (UniqueName: \"kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496221 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496444 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 
crc kubenswrapper[5058]: I1014 09:04:22.496567 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496681 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.496469 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.498169 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.497745 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.497893 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.497632 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.498569 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.502019 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.504289 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.505832 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.512406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jxx7\" (UniqueName: \"kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7\") pod \"dnsmasq-dns-86597f77cc-mngmw\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.512450 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pkxc\" (UniqueName: \"kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc\") pod \"glance-default-internal-api-0\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.544389 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:22 crc kubenswrapper[5058]: I1014 09:04:22.617208 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.066373 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:23 crc kubenswrapper[5058]: W1014 09:04:23.071420 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b6b0087_e323_4975_b374_4801d49f6e1c.slice/crio-194a12a7245f98892ff88913fb17d3ec9f0847ca0edf8cf89cb33b0ad9f5f718 WatchSource:0}: Error finding container 194a12a7245f98892ff88913fb17d3ec9f0847ca0edf8cf89cb33b0ad9f5f718: Status 404 returned error can't find the container with id 194a12a7245f98892ff88913fb17d3ec9f0847ca0edf8cf89cb33b0ad9f5f718 Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.074078 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"] Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.294134 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:23 crc kubenswrapper[5058]: W1014 09:04:23.330965 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df55afc_7b12_41a3_b32c_26d78b94a14d.slice/crio-19b0e54f13287ba6666c21e83f32d85c85bf56e94f65fc6e38c9de948a1d8673 WatchSource:0}: Error finding container 19b0e54f13287ba6666c21e83f32d85c85bf56e94f65fc6e38c9de948a1d8673: Status 404 returned error can't find the container with id 19b0e54f13287ba6666c21e83f32d85c85bf56e94f65fc6e38c9de948a1d8673 Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.756277 5058 generic.go:334] "Generic (PLEG): container finished" podID="94950ffc-adfd-4761-99a6-a458ebae9aa6" 
containerID="07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1" exitCode=0 Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.756374 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" event={"ID":"94950ffc-adfd-4761-99a6-a458ebae9aa6","Type":"ContainerDied","Data":"07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1"} Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.756593 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" event={"ID":"94950ffc-adfd-4761-99a6-a458ebae9aa6","Type":"ContainerStarted","Data":"1206bfe7f17e37bf1fe88482ab30eeb28ae1f62324451642eed7ecd73e5e85b5"} Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.768784 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerStarted","Data":"19b0e54f13287ba6666c21e83f32d85c85bf56e94f65fc6e38c9de948a1d8673"} Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.771422 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerStarted","Data":"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d"} Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.771477 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerStarted","Data":"194a12a7245f98892ff88913fb17d3ec9f0847ca0edf8cf89cb33b0ad9f5f718"} Oct 14 09:04:23 crc kubenswrapper[5058]: I1014 09:04:23.809869 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.783014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerStarted","Data":"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8"} Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.783123 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-log" containerID="cri-o://3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" gracePeriod=30 Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.783258 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-httpd" containerID="cri-o://ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" gracePeriod=30 Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.812786 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.812760529 podStartE2EDuration="2.812760529s" podCreationTimestamp="2025-10-14 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:24.800311203 +0000 UTC m=+8212.711395009" watchObservedRunningTime="2025-10-14 09:04:24.812760529 +0000 UTC m=+8212.723844335" Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.831649 5058 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" podStartSLOduration=2.831632348 podStartE2EDuration="2.831632348s" podCreationTimestamp="2025-10-14 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:24.818115982 +0000 UTC m=+8212.729199788" watchObservedRunningTime="2025-10-14 09:04:24.831632348 +0000 UTC m=+8212.742716154" Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.839976 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.840014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" event={"ID":"94950ffc-adfd-4761-99a6-a458ebae9aa6","Type":"ContainerStarted","Data":"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6"} Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.840033 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerStarted","Data":"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d"} Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.840054 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerStarted","Data":"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb"} Oct 14 09:04:24 crc kubenswrapper[5058]: I1014 09:04:24.859756 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.8597360419999998 podStartE2EDuration="2.859736042s" podCreationTimestamp="2025-10-14 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:24.854901784 +0000 UTC m=+8212.765985600" watchObservedRunningTime="2025-10-14 09:04:24.859736042 +0000 UTC m=+8212.770819848" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.433586 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561323 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561455 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561483 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561524 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.561649 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkbfq\" (UniqueName: \"kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq\") pod \"8b6b0087-e323-4975-b374-4801d49f6e1c\" (UID: \"8b6b0087-e323-4975-b374-4801d49f6e1c\") " Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.562154 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs" (OuterVolumeSpecName: "logs") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.562271 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.566230 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts" (OuterVolumeSpecName: "scripts") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.569071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq" (OuterVolumeSpecName: "kube-api-access-xkbfq") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "kube-api-access-xkbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.611265 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.624300 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data" (OuterVolumeSpecName: "config-data") pod "8b6b0087-e323-4975-b374-4801d49f6e1c" (UID: "8b6b0087-e323-4975-b374-4801d49f6e1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664617 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664646 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkbfq\" (UniqueName: \"kubernetes.io/projected/8b6b0087-e323-4975-b374-4801d49f6e1c-kube-api-access-xkbfq\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664656 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664665 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664673 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b6b0087-e323-4975-b374-4801d49f6e1c-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.664681 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6b0087-e323-4975-b374-4801d49f6e1c-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.846888 5058 generic.go:334] "Generic (PLEG): container finished" podID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerID="ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" exitCode=0 Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.847100 5058 generic.go:334] "Generic (PLEG): container finished" podID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerID="3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" exitCode=143 Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.846978 5058 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.846983 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerDied","Data":"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8"} Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.847227 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerDied","Data":"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d"} Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.847241 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8b6b0087-e323-4975-b374-4801d49f6e1c","Type":"ContainerDied","Data":"194a12a7245f98892ff88913fb17d3ec9f0847ca0edf8cf89cb33b0ad9f5f718"} Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.847322 5058 scope.go:117] "RemoveContainer" containerID="ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.874701 5058 scope.go:117] "RemoveContainer" containerID="3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.881396 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.906508 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.906772 5058 scope.go:117] "RemoveContainer" containerID="ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" Oct 14 09:04:25 crc kubenswrapper[5058]: E1014 09:04:25.908408 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8\": container with ID starting with ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8 not found: ID does not exist" containerID="ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.909361 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8"} err="failed to get container status \"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8\": rpc error: code = NotFound desc = could not find container \"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8\": container with ID starting with ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8 not found: ID does not exist" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.909526 5058 scope.go:117] "RemoveContainer" containerID="3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" Oct 14 09:04:25 crc kubenswrapper[5058]: E1014 09:04:25.910093 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d\": container with ID starting with 3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d not found: ID does not exist" 
containerID="3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.910134 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d"} err="failed to get container status \"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d\": rpc error: code = NotFound desc = could not find container \"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d\": container with ID starting with 3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d not found: ID does not exist" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.910158 5058 scope.go:117] "RemoveContainer" containerID="ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.916498 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8"} err="failed to get container status \"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8\": rpc error: code = NotFound desc = could not find container \"ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8\": container with ID starting with ee0638c6418f915891ff11b363e2afd0a372fd0075e692ca211d220e3acb17a8 not found: ID does not exist" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.916538 5058 scope.go:117] "RemoveContainer" containerID="3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.916767 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.916933 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d"} err="failed to get container status \"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d\": rpc error: code = NotFound desc = could not find container \"3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d\": container with ID starting with 3c4da9adb13f13dedf4ab5d2e930ce8ea1dac7d555c299dedd0aa4cd17067e5d not found: ID does not exist" Oct 14 09:04:25 crc kubenswrapper[5058]: E1014 09:04:25.917608 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-log" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.917726 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-log" Oct 14 09:04:25 crc kubenswrapper[5058]: E1014 09:04:25.917949 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-httpd" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.918045 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-httpd" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.918506 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-log" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.918641 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" containerName="glance-httpd" 
Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.920750 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.923623 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.924560 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984562 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984615 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl5n\" (UniqueName: \"kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984662 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984739 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:25 crc kubenswrapper[5058]: I1014 09:04:25.984869 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.086452 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.086853 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.087018 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl5n\" (UniqueName: \"kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.086917 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.087160 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.087330 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.087384 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.087855 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.095913 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.108447 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.122151 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.123113 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.149569 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knl5n\" (UniqueName: \"kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n\") pod \"glance-default-external-api-0\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.277565 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.804494 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6b0087-e323-4975-b374-4801d49f6e1c" path="/var/lib/kubelet/pods/8b6b0087-e323-4975-b374-4801d49f6e1c/volumes" Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.862388 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-log" containerID="cri-o://6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" gracePeriod=30 Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.862451 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-httpd" containerID="cri-o://1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" gracePeriod=30 Oct 14 09:04:26 crc kubenswrapper[5058]: I1014 09:04:26.884389 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:04:26 crc kubenswrapper[5058]: W1014 09:04:26.886936 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46eea8ed_0b19_4233_9a00_e3fd1e5a5a15.slice/crio-d54350ad5935d7c1e96ce861496c51b2f5a8f404a715946c80466559fc884b0f WatchSource:0}: Error finding container d54350ad5935d7c1e96ce861496c51b2f5a8f404a715946c80466559fc884b0f: Status 404 returned error can't find the container with id d54350ad5935d7c1e96ce861496c51b2f5a8f404a715946c80466559fc884b0f Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.446618 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.613846 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.614246 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pkxc\" (UniqueName: \"kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.614395 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.614486 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.617715 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.617830 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle\") pod \"5df55afc-7b12-41a3-b32c-26d78b94a14d\" (UID: \"5df55afc-7b12-41a3-b32c-26d78b94a14d\") " Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.619008 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc" (OuterVolumeSpecName: "kube-api-access-4pkxc") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "kube-api-access-4pkxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.619345 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.620652 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs" (OuterVolumeSpecName: "logs") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.622349 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts" (OuterVolumeSpecName: "scripts") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.647098 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.696272 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data" (OuterVolumeSpecName: "config-data") pod "5df55afc-7b12-41a3-b32c-26d78b94a14d" (UID: "5df55afc-7b12-41a3-b32c-26d78b94a14d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.720937 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.720968 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pkxc\" (UniqueName: \"kubernetes.io/projected/5df55afc-7b12-41a3-b32c-26d78b94a14d-kube-api-access-4pkxc\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.720978 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.720985 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.720994 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df55afc-7b12-41a3-b32c-26d78b94a14d-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.721002 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df55afc-7b12-41a3-b32c-26d78b94a14d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.883035 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerStarted","Data":"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910"} Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.883101 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerStarted","Data":"d54350ad5935d7c1e96ce861496c51b2f5a8f404a715946c80466559fc884b0f"} Oct 
14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887411 5058 generic.go:334] "Generic (PLEG): container finished" podID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerID="1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" exitCode=0 Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887450 5058 generic.go:334] "Generic (PLEG): container finished" podID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerID="6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" exitCode=143 Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887466 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887488 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerDied","Data":"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d"} Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887552 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerDied","Data":"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb"} Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887576 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5df55afc-7b12-41a3-b32c-26d78b94a14d","Type":"ContainerDied","Data":"19b0e54f13287ba6666c21e83f32d85c85bf56e94f65fc6e38c9de948a1d8673"} Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.887599 5058 scope.go:117] "RemoveContainer" containerID="1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.925207 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.927200 5058 scope.go:117] "RemoveContainer" containerID="6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.933344 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.960044 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:27 crc kubenswrapper[5058]: E1014 09:04:27.960398 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-log" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.960416 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-log" Oct 14 09:04:27 crc kubenswrapper[5058]: E1014 09:04:27.960451 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-httpd" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.960457 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-httpd" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.960621 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-log" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.960643 5058 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" containerName="glance-httpd" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.962764 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.964765 5058 scope.go:117] "RemoveContainer" containerID="1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.966522 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.968965 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 09:04:27 crc kubenswrapper[5058]: E1014 09:04:27.974217 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d\": container with ID starting with 1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d not found: ID does not exist" containerID="1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.974264 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d"} err="failed to get container status \"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d\": rpc error: code = NotFound desc = could not find container \"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d\": container with ID starting with 1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d not found: ID does not exist" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.974292 5058 scope.go:117] "RemoveContainer" containerID="6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" Oct 14 09:04:27 crc kubenswrapper[5058]: E1014 09:04:27.976898 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb\": container with ID starting with 6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb not found: ID does not exist" containerID="6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.976935 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb"} err="failed to get container status \"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb\": rpc error: code = NotFound desc = could not find container \"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb\": container with ID starting with 6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb not found: ID does not exist" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.976957 5058 scope.go:117] "RemoveContainer" containerID="1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.977260 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d"} 
err="failed to get container status \"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d\": rpc error: code = NotFound desc = could not find container \"1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d\": container with ID starting with 1ab47b99e9ee197eca9988fc9f2dadbb6a7db85e1805a841211a8715a03c630d not found: ID does not exist" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.977275 5058 scope.go:117] "RemoveContainer" containerID="6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb" Oct 14 09:04:27 crc kubenswrapper[5058]: I1014 09:04:27.977437 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb"} err="failed to get container status \"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb\": rpc error: code = NotFound desc = could not find container \"6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb\": container with ID starting with 6b3e20c84681a032775a69adcfdb359be1f23dfc4e2a13b6afb8b0cbb7fabbdb not found: ID does not exist" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.131850 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.131914 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.131946 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.132058 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2wk\" (UniqueName: \"kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.132140 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.132169 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.233998 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.234332 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.234354 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.234423 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2wk\" (UniqueName: \"kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.234499 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.234530 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.235325 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.240159 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.242568 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.248472 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.249155 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.252480 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2wk\" (UniqueName: \"kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk\") pod \"glance-default-internal-api-0\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.280354 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.807550 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" path="/var/lib/kubelet/pods/5df55afc-7b12-41a3-b32c-26d78b94a14d/volumes" Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.862153 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.902667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerStarted","Data":"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b"} Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.907647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"0c400045f893859d277f3296e3956be484a7ac57177075a8e5784683da5b0c3c"} Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.929508 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.9294826819999997 podStartE2EDuration="3.929482682s" podCreationTimestamp="2025-10-14 09:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:28.920088474 +0000 UTC m=+8216.831172300" watchObservedRunningTime="2025-10-14 09:04:28.929482682 +0000 UTC m=+8216.840566498" Oct 14 09:04:29 crc kubenswrapper[5058]: I1014 09:04:29.919368 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6"} Oct 14 09:04:29 crc kubenswrapper[5058]: I1014 09:04:29.920106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718"} Oct 14 09:04:29 crc kubenswrapper[5058]: 
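Annotation: each of the six glance-default-internal-api-0 volumes runs the same reconciler sequence above: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). All six complete here; a volume that logs "started" but never "succeeded" is what to grep for when a pod hangs in ContainerCreating. A rough pairing script, assuming the escaped-quote formatting shown above (illustrative only):

    import re
    import sys

    started, succeeded = set(), set()
    for line in sys.stdin:
        m = re.search(r'MountVolume started for volume \\"([^\\"]+)\\"', line)
        if m:
            started.add(m.group(1))
        m = re.search(r'MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\"', line)
        if m:
            succeeded.add(m.group(1))

    # Volumes that began mounting but never reported success are worth a look.
    for vol in sorted(started - succeeded):
        print("no SetUp success seen for:", vol)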
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.280354 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.807550 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df55afc-7b12-41a3-b32c-26d78b94a14d" path="/var/lib/kubelet/pods/5df55afc-7b12-41a3-b32c-26d78b94a14d/volumes"
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.862153 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.902667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerStarted","Data":"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b"}
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.907647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"0c400045f893859d277f3296e3956be484a7ac57177075a8e5784683da5b0c3c"}
Oct 14 09:04:28 crc kubenswrapper[5058]: I1014 09:04:28.929508 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.9294826819999997 podStartE2EDuration="3.929482682s" podCreationTimestamp="2025-10-14 09:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:28.920088474 +0000 UTC m=+8216.831172300" watchObservedRunningTime="2025-10-14 09:04:28.929482682 +0000 UTC m=+8216.840566498"
Oct 14 09:04:29 crc kubenswrapper[5058]: I1014 09:04:29.919368 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6"}
Oct 14 09:04:29 crc kubenswrapper[5058]: I1014 09:04:29.920106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerStarted","Data":"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718"}
Oct 14 09:04:29 crc kubenswrapper[5058]: I1014 09:04:29.957339 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.952310686 podStartE2EDuration="2.952310686s" podCreationTimestamp="2025-10-14 09:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:04:29.939219151 +0000 UTC m=+8217.850302997" watchObservedRunningTime="2025-10-14 09:04:29.952310686 +0000 UTC m=+8217.863394512"
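Annotation: in both tracker entries above, firstStartedPulling and lastFinishedPulling are the zero time (0001-01-01), so no image pull contributed and podStartSLOduration equals podStartE2EDuration, which is simply watchObservedRunningTime minus podCreationTimestamp: 09:04:29.952310686 minus 09:04:27 = 2.952310686 s for the internal API pod, and 09:04:28.929482682 minus 09:04:25 = 3.929482682 s for the external one. A quick check (Python's datetime only carries microseconds, so the value is truncated):

    from datetime import datetime, timezone

    # Values taken from the glance-default-internal-api-0 tracker entry above.
    created = datetime(2025, 10, 14, 9, 4, 27, tzinfo=timezone.utc)
    observed = datetime(2025, 10, 14, 9, 4, 29, 952310, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 2.95231, matching podStartE2EDuration="2.952310686s"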
Oct 14 09:04:32 crc kubenswrapper[5058]: I1014 09:04:32.547199 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86597f77cc-mngmw"
Oct 14 09:04:32 crc kubenswrapper[5058]: I1014 09:04:32.638380 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"]
Oct 14 09:04:32 crc kubenswrapper[5058]: I1014 09:04:32.638951 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7976679757-vfnsv" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="dnsmasq-dns" containerID="cri-o://c071a029db34610e11b3ad742f349ed825f1b071a4c4b0c7ffa2c4422c9e8d85" gracePeriod=10
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.019439 5058 generic.go:334] "Generic (PLEG): container finished" podID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerID="c071a029db34610e11b3ad742f349ed825f1b071a4c4b0c7ffa2c4422c9e8d85" exitCode=0
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.019526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976679757-vfnsv" event={"ID":"9512bdc2-4a78-401e-a9a6-f0c7da207183","Type":"ContainerDied","Data":"c071a029db34610e11b3ad742f349ed825f1b071a4c4b0c7ffa2c4422c9e8d85"}
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.209242 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7976679757-vfnsv"
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.361366 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config\") pod \"9512bdc2-4a78-401e-a9a6-f0c7da207183\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") "
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.361452 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb\") pod \"9512bdc2-4a78-401e-a9a6-f0c7da207183\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") "
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.361515 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc\") pod \"9512bdc2-4a78-401e-a9a6-f0c7da207183\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") "
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.361564 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vxv\" (UniqueName: \"kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv\") pod \"9512bdc2-4a78-401e-a9a6-f0c7da207183\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") "
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.361593 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb\") pod \"9512bdc2-4a78-401e-a9a6-f0c7da207183\" (UID: \"9512bdc2-4a78-401e-a9a6-f0c7da207183\") "
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.377097 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv" (OuterVolumeSpecName: "kube-api-access-s5vxv") pod "9512bdc2-4a78-401e-a9a6-f0c7da207183" (UID: "9512bdc2-4a78-401e-a9a6-f0c7da207183"). InnerVolumeSpecName "kube-api-access-s5vxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.404481 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config" (OuterVolumeSpecName: "config") pod "9512bdc2-4a78-401e-a9a6-f0c7da207183" (UID: "9512bdc2-4a78-401e-a9a6-f0c7da207183"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.404994 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9512bdc2-4a78-401e-a9a6-f0c7da207183" (UID: "9512bdc2-4a78-401e-a9a6-f0c7da207183"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.428782 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9512bdc2-4a78-401e-a9a6-f0c7da207183" (UID: "9512bdc2-4a78-401e-a9a6-f0c7da207183"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.434405 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9512bdc2-4a78-401e-a9a6-f0c7da207183" (UID: "9512bdc2-4a78-401e-a9a6-f0c7da207183"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.464186 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.464239 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vxv\" (UniqueName: \"kubernetes.io/projected/9512bdc2-4a78-401e-a9a6-f0c7da207183-kube-api-access-s5vxv\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.464263 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.464285 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-config\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:33 crc kubenswrapper[5058]: I1014 09:04:33.464305 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9512bdc2-4a78-401e-a9a6-f0c7da207183-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.037134 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976679757-vfnsv" event={"ID":"9512bdc2-4a78-401e-a9a6-f0c7da207183","Type":"ContainerDied","Data":"7789c9b81b88c38f6615228739b95c350b13cf1c063bab9c578354123554bcd9"}
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.037211 5058 scope.go:117] "RemoveContainer" containerID="c071a029db34610e11b3ad742f349ed825f1b071a4c4b0c7ffa2c4422c9e8d85"
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.037294 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7976679757-vfnsv"
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.080533 5058 scope.go:117] "RemoveContainer" containerID="38f170261852408abb68bbfe9e248849b9f2a88cbbd6deeb218a2543421b6928"
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.101753 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"]
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.116869 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7976679757-vfnsv"]
Oct 14 09:04:34 crc kubenswrapper[5058]: I1014 09:04:34.804462 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" path="/var/lib/kubelet/pods/9512bdc2-4a78-401e-a9a6-f0c7da207183/volumes"
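Annotation: the dnsmasq-dns-7976679757-vfnsv entries above trace the kubelet's full teardown order: SyncLoop DELETE, "Killing container with a grace period" (gracePeriod=10), PLEG ContainerDied for the container and later the sandbox, UnmountVolume/TearDown for each of the five volumes, "Volume detached" once the reconciler confirms, SyncLoop REMOVE once the API object is gone, and finally "Cleaned up orphaned pod volumes dir". A sketch that timestamps the first occurrence of each milestone (illustrative; a real version would also filter by pod):

    import re
    import sys

    MILESTONES = [
        ("delete", '"SyncLoop DELETE"'),
        ("killing", '"Killing container with a grace period"'),
        ("died", '"Type":"ContainerDied"'),
        ("unmount", "operationExecutor.UnmountVolume started"),
        ("detached", "Volume detached for volume"),
        ("remove", '"SyncLoop REMOVE"'),
        ("cleaned", "Cleaned up orphaned pod volumes dir"),
    ]
    # klog prefix, e.g. "I1014 09:04:33.019439" (level letter I/E/W, then MMDD and time).
    TS = re.compile(r'[IEW](\d{4} \d{2}:\d{2}:\d{2}\.\d+)')

    first = {}
    for line in sys.stdin:
        ts = TS.search(line)
        for name, needle in MILESTONES:
            if needle in line and name not in first:
                first[name] = ts.group(1) if ts else "?"

    for name, _ in MILESTONES:
        if name in first:
            print(first[name], name)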
Oct 14 09:04:36 crc kubenswrapper[5058]: I1014 09:04:36.278139 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:36 crc kubenswrapper[5058]: I1014 09:04:36.278526 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:36 crc kubenswrapper[5058]: I1014 09:04:36.317266 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:36 crc kubenswrapper[5058]: I1014 09:04:36.357734 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:37 crc kubenswrapper[5058]: I1014 09:04:37.074708 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:37 crc kubenswrapper[5058]: I1014 09:04:37.074765 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:38 crc kubenswrapper[5058]: I1014 09:04:38.281908 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:38 crc kubenswrapper[5058]: I1014 09:04:38.282303 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:38 crc kubenswrapper[5058]: I1014 09:04:38.321787 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:38 crc kubenswrapper[5058]: I1014 09:04:38.378662 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:39 crc kubenswrapper[5058]: I1014 09:04:39.099784 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:39 crc kubenswrapper[5058]: I1014 09:04:39.099890 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:39 crc kubenswrapper[5058]: I1014 09:04:39.514250 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:39 crc kubenswrapper[5058]: I1014 09:04:39.514496 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:04:39 crc kubenswrapper[5058]: I1014 09:04:39.516262 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 14 09:04:40 crc kubenswrapper[5058]: I1014 09:04:40.899622 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 14 09:04:40 crc kubenswrapper[5058]: I1014 09:04:40.906683 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
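Annotation: the probe entries above are ordinary startup flapping: both glance pods report startup "unhealthy" before "started", then readiness with an empty status (no result yet) before "ready". Each result appears twice, likely once per probed container in the pod (both pods start two containers above). A sketch that reduces the noise to distinct transitions (illustrative):

    import re
    import sys

    PROBE = re.compile(r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')

    last = {}
    for line in sys.stdin:
        m = PROBE.search(line)
        if not m:
            continue
        probe, status, pod = m.groups()
        key = (pod, probe)
        if last.get(key) != status:
            print(pod, probe, repr(last.get(key)), "->", repr(status))
            last[key] = status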
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.790597 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6hm4l"]
Oct 14 09:04:47 crc kubenswrapper[5058]: E1014 09:04:47.791369 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="init"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.791382 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="init"
Oct 14 09:04:47 crc kubenswrapper[5058]: E1014 09:04:47.791422 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="dnsmasq-dns"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.791428 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="dnsmasq-dns"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.791636 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9512bdc2-4a78-401e-a9a6-f0c7da207183" containerName="dnsmasq-dns"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.792432 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.805766 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6hm4l"]
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.880643 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfg6\" (UniqueName: \"kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6\") pod \"placement-db-create-6hm4l\" (UID: \"dcba8df7-31f3-4c31-84ef-0c687e1ad241\") " pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:47 crc kubenswrapper[5058]: I1014 09:04:47.985321 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfg6\" (UniqueName: \"kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6\") pod \"placement-db-create-6hm4l\" (UID: \"dcba8df7-31f3-4c31-84ef-0c687e1ad241\") " pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:48 crc kubenswrapper[5058]: I1014 09:04:48.008611 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfg6\" (UniqueName: \"kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6\") pod \"placement-db-create-6hm4l\" (UID: \"dcba8df7-31f3-4c31-84ef-0c687e1ad241\") " pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:48 crc kubenswrapper[5058]: I1014 09:04:48.114675 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:48 crc kubenswrapper[5058]: I1014 09:04:48.435445 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6hm4l"]
Oct 14 09:04:48 crc kubenswrapper[5058]: W1014 09:04:48.441454 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcba8df7_31f3_4c31_84ef_0c687e1ad241.slice/crio-0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb WatchSource:0}: Error finding container 0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb: Status 404 returned error can't find the container with id 0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb
Oct 14 09:04:49 crc kubenswrapper[5058]: I1014 09:04:49.247406 5058 generic.go:334] "Generic (PLEG): container finished" podID="dcba8df7-31f3-4c31-84ef-0c687e1ad241" containerID="f78142137be3dd05b8966b9327f89a2758cf54b7d246b06987216eebc1c23e4a" exitCode=0
Oct 14 09:04:49 crc kubenswrapper[5058]: I1014 09:04:49.247465 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6hm4l" event={"ID":"dcba8df7-31f3-4c31-84ef-0c687e1ad241","Type":"ContainerDied","Data":"f78142137be3dd05b8966b9327f89a2758cf54b7d246b06987216eebc1c23e4a"}
Oct 14 09:04:49 crc kubenswrapper[5058]: I1014 09:04:49.247750 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6hm4l" event={"ID":"dcba8df7-31f3-4c31-84ef-0c687e1ad241","Type":"ContainerStarted","Data":"0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb"}
Oct 14 09:04:50 crc kubenswrapper[5058]: I1014 09:04:50.731129 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:50 crc kubenswrapper[5058]: I1014 09:04:50.852698 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqfg6\" (UniqueName: \"kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6\") pod \"dcba8df7-31f3-4c31-84ef-0c687e1ad241\" (UID: \"dcba8df7-31f3-4c31-84ef-0c687e1ad241\") "
Oct 14 09:04:50 crc kubenswrapper[5058]: I1014 09:04:50.861658 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6" (OuterVolumeSpecName: "kube-api-access-fqfg6") pod "dcba8df7-31f3-4c31-84ef-0c687e1ad241" (UID: "dcba8df7-31f3-4c31-84ef-0c687e1ad241"). InnerVolumeSpecName "kube-api-access-fqfg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:04:50 crc kubenswrapper[5058]: I1014 09:04:50.955495 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqfg6\" (UniqueName: \"kubernetes.io/projected/dcba8df7-31f3-4c31-84ef-0c687e1ad241-kube-api-access-fqfg6\") on node \"crc\" DevicePath \"\""
Oct 14 09:04:51 crc kubenswrapper[5058]: I1014 09:04:51.273407 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6hm4l" event={"ID":"dcba8df7-31f3-4c31-84ef-0c687e1ad241","Type":"ContainerDied","Data":"0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb"}
Oct 14 09:04:51 crc kubenswrapper[5058]: I1014 09:04:51.273789 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddfbcedd8850ed034f6a0eefcadde732f2d0782674e76598ad7823400d68acb"
Oct 14 09:04:51 crc kubenswrapper[5058]: I1014 09:04:51.273529 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6hm4l"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.865893 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7e14-account-create-mj6zp"]
Oct 14 09:04:57 crc kubenswrapper[5058]: E1014 09:04:57.866887 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcba8df7-31f3-4c31-84ef-0c687e1ad241" containerName="mariadb-database-create"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.866903 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcba8df7-31f3-4c31-84ef-0c687e1ad241" containerName="mariadb-database-create"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.867130 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcba8df7-31f3-4c31-84ef-0c687e1ad241" containerName="mariadb-database-create"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.868001 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.874240 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Oct 14 09:04:57 crc kubenswrapper[5058]: I1014 09:04:57.890411 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7e14-account-create-mj6zp"]
Oct 14 09:04:58 crc kubenswrapper[5058]: I1014 09:04:58.005475 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlhxb\" (UniqueName: \"kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb\") pod \"placement-7e14-account-create-mj6zp\" (UID: \"aad1878c-a41e-4233-9828-414e241e1a70\") " pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:04:58 crc kubenswrapper[5058]: I1014 09:04:58.107820 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlhxb\" (UniqueName: \"kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb\") pod \"placement-7e14-account-create-mj6zp\" (UID: \"aad1878c-a41e-4233-9828-414e241e1a70\") " pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:04:58 crc kubenswrapper[5058]: I1014 09:04:58.135038 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlhxb\" (UniqueName: \"kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb\") pod \"placement-7e14-account-create-mj6zp\" (UID: \"aad1878c-a41e-4233-9828-414e241e1a70\") " pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:04:58 crc kubenswrapper[5058]: I1014 09:04:58.190346 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:04:58 crc kubenswrapper[5058]: I1014 09:04:58.712469 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7e14-account-create-mj6zp"]
Oct 14 09:04:58 crc kubenswrapper[5058]: W1014 09:04:58.720495 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad1878c_a41e_4233_9828_414e241e1a70.slice/crio-7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1 WatchSource:0}: Error finding container 7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1: Status 404 returned error can't find the container with id 7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1
Oct 14 09:04:59 crc kubenswrapper[5058]: I1014 09:04:59.381433 5058 generic.go:334] "Generic (PLEG): container finished" podID="aad1878c-a41e-4233-9828-414e241e1a70" containerID="18b7726ea8ebc30436362674b35bd5d67f1f1bfd54e722d077c9b59928a76802" exitCode=0
Oct 14 09:04:59 crc kubenswrapper[5058]: I1014 09:04:59.381512 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7e14-account-create-mj6zp" event={"ID":"aad1878c-a41e-4233-9828-414e241e1a70","Type":"ContainerDied","Data":"18b7726ea8ebc30436362674b35bd5d67f1f1bfd54e722d077c9b59928a76802"}
Oct 14 09:04:59 crc kubenswrapper[5058]: I1014 09:04:59.381554 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7e14-account-create-mj6zp" event={"ID":"aad1878c-a41e-4233-9828-414e241e1a70","Type":"ContainerStarted","Data":"7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1"}
Oct 14 09:05:00 crc kubenswrapper[5058]: I1014 09:05:00.879024 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:05:00 crc kubenswrapper[5058]: I1014 09:05:00.974825 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlhxb\" (UniqueName: \"kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb\") pod \"aad1878c-a41e-4233-9828-414e241e1a70\" (UID: \"aad1878c-a41e-4233-9828-414e241e1a70\") "
Oct 14 09:05:00 crc kubenswrapper[5058]: I1014 09:05:00.985155 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb" (OuterVolumeSpecName: "kube-api-access-nlhxb") pod "aad1878c-a41e-4233-9828-414e241e1a70" (UID: "aad1878c-a41e-4233-9828-414e241e1a70"). InnerVolumeSpecName "kube-api-access-nlhxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:05:01 crc kubenswrapper[5058]: I1014 09:05:01.077431 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlhxb\" (UniqueName: \"kubernetes.io/projected/aad1878c-a41e-4233-9828-414e241e1a70-kube-api-access-nlhxb\") on node \"crc\" DevicePath \"\""
Oct 14 09:05:01 crc kubenswrapper[5058]: I1014 09:05:01.406683 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7e14-account-create-mj6zp" event={"ID":"aad1878c-a41e-4233-9828-414e241e1a70","Type":"ContainerDied","Data":"7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1"}
Oct 14 09:05:01 crc kubenswrapper[5058]: I1014 09:05:01.406718 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7262d5d7cb34f3cae870136afaf64f153dc254d98fb23e298fae05f5200563f1"
Oct 14 09:05:01 crc kubenswrapper[5058]: I1014 09:05:01.406840 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7e14-account-create-mj6zp"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.084019 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fm9pt"]
Oct 14 09:05:03 crc kubenswrapper[5058]: E1014 09:05:03.084820 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad1878c-a41e-4233-9828-414e241e1a70" containerName="mariadb-account-create"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.084838 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1878c-a41e-4233-9828-414e241e1a70" containerName="mariadb-account-create"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.085046 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1878c-a41e-4233-9828-414e241e1a70" containerName="mariadb-account-create"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.085710 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.087185 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-m8dqr"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.087683 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.088394 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.104550 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fm9pt"]
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.122016 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"]
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.123553 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.162191 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"]
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218300 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218340 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218389 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btkwc\" (UniqueName: \"kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218424 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218556 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218679 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.218770 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.219011 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt"
Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.219104 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
\"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.219214 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.320941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321003 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321024 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321059 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btkwc\" (UniqueName: \"kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321094 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321126 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321183 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts\") 
pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321210 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.321240 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.322136 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.322357 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.322552 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.322887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.323053 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.326446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.326662 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.328048 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.339702 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") pod \"dnsmasq-dns-7457c8cd99-jsz2c\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.341763 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btkwc\" (UniqueName: \"kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc\") pod \"placement-db-sync-fm9pt\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.402314 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.441464 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.655944 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.656265 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.871037 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fm9pt"] Oct 14 09:05:03 crc kubenswrapper[5058]: W1014 09:05:03.875730 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c6fa10_836b_4dd1_bf93_0efd2101ad05.slice/crio-1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f WatchSource:0}: Error finding container 1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f: Status 404 returned error can't find the container with id 1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f Oct 14 09:05:03 crc kubenswrapper[5058]: I1014 09:05:03.970842 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"] Oct 14 09:05:03 crc kubenswrapper[5058]: W1014 09:05:03.973184 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5110179_f50d_468f_a925_adb04c2d62db.slice/crio-6659313c254ab97265b31739a7d432354ca3e646e97c1f9ba3b666e2190dc9fb WatchSource:0}: Error finding container 6659313c254ab97265b31739a7d432354ca3e646e97c1f9ba3b666e2190dc9fb: Status 404 returned error can't find the container with id 
6659313c254ab97265b31739a7d432354ca3e646e97c1f9ba3b666e2190dc9fb Oct 14 09:05:04 crc kubenswrapper[5058]: I1014 09:05:04.441259 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fm9pt" event={"ID":"82c6fa10-836b-4dd1-bf93-0efd2101ad05","Type":"ContainerStarted","Data":"1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f"} Oct 14 09:05:04 crc kubenswrapper[5058]: I1014 09:05:04.443627 5058 generic.go:334] "Generic (PLEG): container finished" podID="e5110179-f50d-468f-a925-adb04c2d62db" containerID="ecb9dc79b19471c03ca5914d222875a6991c8225e341d415711ca03b171607d5" exitCode=0 Oct 14 09:05:04 crc kubenswrapper[5058]: I1014 09:05:04.443667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" event={"ID":"e5110179-f50d-468f-a925-adb04c2d62db","Type":"ContainerDied","Data":"ecb9dc79b19471c03ca5914d222875a6991c8225e341d415711ca03b171607d5"} Oct 14 09:05:04 crc kubenswrapper[5058]: I1014 09:05:04.443694 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" event={"ID":"e5110179-f50d-468f-a925-adb04c2d62db","Type":"ContainerStarted","Data":"6659313c254ab97265b31739a7d432354ca3e646e97c1f9ba3b666e2190dc9fb"} Oct 14 09:05:05 crc kubenswrapper[5058]: I1014 09:05:05.454723 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" event={"ID":"e5110179-f50d-468f-a925-adb04c2d62db","Type":"ContainerStarted","Data":"539db843c3a72cd66c250fb1d6653beaa3a70150a825c024b66a71d2af7b3208"} Oct 14 09:05:05 crc kubenswrapper[5058]: I1014 09:05:05.455160 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:05:05 crc kubenswrapper[5058]: I1014 09:05:05.477398 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" podStartSLOduration=2.477382801 podStartE2EDuration="2.477382801s" podCreationTimestamp="2025-10-14 09:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:05:05.470745191 +0000 UTC m=+8253.381829017" watchObservedRunningTime="2025-10-14 09:05:05.477382801 +0000 UTC m=+8253.388466607" Oct 14 09:05:07 crc kubenswrapper[5058]: I1014 09:05:07.482930 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fm9pt" event={"ID":"82c6fa10-836b-4dd1-bf93-0efd2101ad05","Type":"ContainerStarted","Data":"59ca14242bbc69f17c933e8f9c5d2561578fac5cef66c06c35422cbd60b0944c"} Oct 14 09:05:07 crc kubenswrapper[5058]: I1014 09:05:07.509645 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fm9pt" podStartSLOduration=1.211272442 podStartE2EDuration="4.509626337s" podCreationTimestamp="2025-10-14 09:05:03 +0000 UTC" firstStartedPulling="2025-10-14 09:05:03.878627481 +0000 UTC m=+8251.789711287" lastFinishedPulling="2025-10-14 09:05:07.176981376 +0000 UTC m=+8255.088065182" observedRunningTime="2025-10-14 09:05:07.505697284 +0000 UTC m=+8255.416781160" watchObservedRunningTime="2025-10-14 09:05:07.509626337 +0000 UTC m=+8255.420710143" Oct 14 09:05:09 crc kubenswrapper[5058]: I1014 09:05:09.511594 5058 generic.go:334] "Generic (PLEG): container finished" podID="82c6fa10-836b-4dd1-bf93-0efd2101ad05" containerID="59ca14242bbc69f17c933e8f9c5d2561578fac5cef66c06c35422cbd60b0944c" exitCode=0 Oct 14 09:05:09 crc kubenswrapper[5058]: I1014 
09:05:09.511697 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fm9pt" event={"ID":"82c6fa10-836b-4dd1-bf93-0efd2101ad05","Type":"ContainerDied","Data":"59ca14242bbc69f17c933e8f9c5d2561578fac5cef66c06c35422cbd60b0944c"} Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.013858 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.085417 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle\") pod \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.085567 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data\") pod \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.085667 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs\") pod \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.085897 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts\") pod \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.086853 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs" (OuterVolumeSpecName: "logs") pod "82c6fa10-836b-4dd1-bf93-0efd2101ad05" (UID: "82c6fa10-836b-4dd1-bf93-0efd2101ad05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.086032 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btkwc\" (UniqueName: \"kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc\") pod \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\" (UID: \"82c6fa10-836b-4dd1-bf93-0efd2101ad05\") " Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.087585 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c6fa10-836b-4dd1-bf93-0efd2101ad05-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.098090 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts" (OuterVolumeSpecName: "scripts") pod "82c6fa10-836b-4dd1-bf93-0efd2101ad05" (UID: "82c6fa10-836b-4dd1-bf93-0efd2101ad05"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.098766 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc" (OuterVolumeSpecName: "kube-api-access-btkwc") pod "82c6fa10-836b-4dd1-bf93-0efd2101ad05" (UID: "82c6fa10-836b-4dd1-bf93-0efd2101ad05"). InnerVolumeSpecName "kube-api-access-btkwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.138389 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data" (OuterVolumeSpecName: "config-data") pod "82c6fa10-836b-4dd1-bf93-0efd2101ad05" (UID: "82c6fa10-836b-4dd1-bf93-0efd2101ad05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.140071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82c6fa10-836b-4dd1-bf93-0efd2101ad05" (UID: "82c6fa10-836b-4dd1-bf93-0efd2101ad05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.189680 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.189878 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btkwc\" (UniqueName: \"kubernetes.io/projected/82c6fa10-836b-4dd1-bf93-0efd2101ad05-kube-api-access-btkwc\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.189905 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.189939 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c6fa10-836b-4dd1-bf93-0efd2101ad05-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.539708 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fm9pt" event={"ID":"82c6fa10-836b-4dd1-bf93-0efd2101ad05","Type":"ContainerDied","Data":"1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f"} Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.539756 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1275fdb67b94e682780ae5e76dda93b93cae1e663379c100c45eeb34ae798e7f" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.539872 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-fm9pt" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.732234 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-667758b98d-p65v6"] Oct 14 09:05:11 crc kubenswrapper[5058]: E1014 09:05:11.732763 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c6fa10-836b-4dd1-bf93-0efd2101ad05" containerName="placement-db-sync" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.732788 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c6fa10-836b-4dd1-bf93-0efd2101ad05" containerName="placement-db-sync" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.733183 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c6fa10-836b-4dd1-bf93-0efd2101ad05" containerName="placement-db-sync" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.734831 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.740000 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.740443 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.742147 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-m8dqr" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.754642 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-667758b98d-p65v6"] Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.803241 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkqw\" (UniqueName: \"kubernetes.io/projected/fe935e0a-7cdb-4711-b7f8-6e8315568e92-kube-api-access-fjkqw\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.803326 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe935e0a-7cdb-4711-b7f8-6e8315568e92-logs\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.803373 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-scripts\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.803405 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-combined-ca-bundle\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.803491 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-config-data\") pod 
\"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.905717 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkqw\" (UniqueName: \"kubernetes.io/projected/fe935e0a-7cdb-4711-b7f8-6e8315568e92-kube-api-access-fjkqw\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.905820 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe935e0a-7cdb-4711-b7f8-6e8315568e92-logs\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.905870 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-scripts\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.905903 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-combined-ca-bundle\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.905988 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-config-data\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.906414 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe935e0a-7cdb-4711-b7f8-6e8315568e92-logs\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.910650 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-scripts\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.911862 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-combined-ca-bundle\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.912301 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe935e0a-7cdb-4711-b7f8-6e8315568e92-config-data\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:11 crc kubenswrapper[5058]: I1014 09:05:11.931084 
5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjkqw\" (UniqueName: \"kubernetes.io/projected/fe935e0a-7cdb-4711-b7f8-6e8315568e92-kube-api-access-fjkqw\") pod \"placement-667758b98d-p65v6\" (UID: \"fe935e0a-7cdb-4711-b7f8-6e8315568e92\") " pod="openstack/placement-667758b98d-p65v6"
Oct 14 09:05:12 crc kubenswrapper[5058]: I1014 09:05:12.058479 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-667758b98d-p65v6"
Oct 14 09:05:12 crc kubenswrapper[5058]: I1014 09:05:12.583750 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-667758b98d-p65v6"]
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.444093 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c"
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.541928 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"]
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.542637 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="dnsmasq-dns" containerID="cri-o://82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6" gracePeriod=10
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.565044 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667758b98d-p65v6" event={"ID":"fe935e0a-7cdb-4711-b7f8-6e8315568e92","Type":"ContainerStarted","Data":"b94859703f531c6889410d7b6f36d631fcd0efaf3aa6697a48070dff41b26023"}
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.565088 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667758b98d-p65v6" event={"ID":"fe935e0a-7cdb-4711-b7f8-6e8315568e92","Type":"ContainerStarted","Data":"ece38e1239bee87e01c71d1b65ebca1751a58d1b1235fb8544ac7bdf15f10382"}
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.565097 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-667758b98d-p65v6" event={"ID":"fe935e0a-7cdb-4711-b7f8-6e8315568e92","Type":"ContainerStarted","Data":"08475a1761bf37889cee86706968f381b74cb0429099af3231142ac666910f9c"}
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.565921 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-667758b98d-p65v6"
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.565961 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-667758b98d-p65v6"
Oct 14 09:05:13 crc kubenswrapper[5058]: I1014 09:05:13.600818 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-667758b98d-p65v6" podStartSLOduration=2.600787322 podStartE2EDuration="2.600787322s" podCreationTimestamp="2025-10-14 09:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:05:13.591603799 +0000 UTC m=+8261.502687675" watchObservedRunningTime="2025-10-14 09:05:13.600787322 +0000 UTC m=+8261.511871128"
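The pod_startup_latency_tracker record above reports podStartSLOduration=2.600787322: with no image pulls (firstStartedPulling and lastFinishedPulling are the zero time), the SLO duration reduces to watchObservedRunningTime minus podCreationTimestamp. A minimal check of that arithmetic, assuming only that the timestamps use Go's default time.Time formatting as printed in the record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above;
	// the layout is Go's default time.Time string format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-10-14 09:05:11 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-14 09:05:13.600787322 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no image pulls (firstStartedPulling/lastFinishedPulling are the
	// zero time), the reported SLO duration is simply running - created.
	fmt.Println(running.Sub(created)) // 2.600787322s
}
```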
Need to start a new one" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.050367 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jxx7\" (UniqueName: \"kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7\") pod \"94950ffc-adfd-4761-99a6-a458ebae9aa6\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.050480 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc\") pod \"94950ffc-adfd-4761-99a6-a458ebae9aa6\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.050529 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config\") pod \"94950ffc-adfd-4761-99a6-a458ebae9aa6\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.050596 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb\") pod \"94950ffc-adfd-4761-99a6-a458ebae9aa6\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.050679 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb\") pod \"94950ffc-adfd-4761-99a6-a458ebae9aa6\" (UID: \"94950ffc-adfd-4761-99a6-a458ebae9aa6\") " Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.059024 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7" (OuterVolumeSpecName: "kube-api-access-8jxx7") pod "94950ffc-adfd-4761-99a6-a458ebae9aa6" (UID: "94950ffc-adfd-4761-99a6-a458ebae9aa6"). InnerVolumeSpecName "kube-api-access-8jxx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.105337 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config" (OuterVolumeSpecName: "config") pod "94950ffc-adfd-4761-99a6-a458ebae9aa6" (UID: "94950ffc-adfd-4761-99a6-a458ebae9aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.105464 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94950ffc-adfd-4761-99a6-a458ebae9aa6" (UID: "94950ffc-adfd-4761-99a6-a458ebae9aa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.112535 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94950ffc-adfd-4761-99a6-a458ebae9aa6" (UID: "94950ffc-adfd-4761-99a6-a458ebae9aa6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.135492 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94950ffc-adfd-4761-99a6-a458ebae9aa6" (UID: "94950ffc-adfd-4761-99a6-a458ebae9aa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.153917 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.153949 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.153959 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.153979 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94950ffc-adfd-4761-99a6-a458ebae9aa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.153989 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jxx7\" (UniqueName: \"kubernetes.io/projected/94950ffc-adfd-4761-99a6-a458ebae9aa6-kube-api-access-8jxx7\") on node \"crc\" DevicePath \"\"" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.581378 5058 generic.go:334] "Generic (PLEG): container finished" podID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerID="82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6" exitCode=0 Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.581550 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" event={"ID":"94950ffc-adfd-4761-99a6-a458ebae9aa6","Type":"ContainerDied","Data":"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6"} Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.581625 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" event={"ID":"94950ffc-adfd-4761-99a6-a458ebae9aa6","Type":"ContainerDied","Data":"1206bfe7f17e37bf1fe88482ab30eeb28ae1f62324451642eed7ecd73e5e85b5"} Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.581664 5058 scope.go:117] "RemoveContainer" containerID="82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.583349 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86597f77cc-mngmw" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.617255 5058 scope.go:117] "RemoveContainer" containerID="07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.648856 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"] Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.665363 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86597f77cc-mngmw"] Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.678073 5058 scope.go:117] "RemoveContainer" containerID="82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6" Oct 14 09:05:14 crc kubenswrapper[5058]: E1014 09:05:14.678782 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6\": container with ID starting with 82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6 not found: ID does not exist" containerID="82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.678865 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6"} err="failed to get container status \"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6\": rpc error: code = NotFound desc = could not find container \"82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6\": container with ID starting with 82593974a4002254f8f21c8b1e0bdc00b75ee511a24aa07587be80818c487eb6 not found: ID does not exist" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.678904 5058 scope.go:117] "RemoveContainer" containerID="07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1" Oct 14 09:05:14 crc kubenswrapper[5058]: E1014 09:05:14.679520 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1\": container with ID starting with 07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1 not found: ID does not exist" containerID="07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.679562 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1"} err="failed to get container status \"07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1\": rpc error: code = NotFound desc = could not find container \"07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1\": container with ID starting with 07d37c881c2e63dc092d2c04ec48de7466f9e4bf35662754de68720161c60ee1 not found: ID does not exist" Oct 14 09:05:14 crc kubenswrapper[5058]: I1014 09:05:14.812089 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" path="/var/lib/kubelet/pods/94950ffc-adfd-4761-99a6-a458ebae9aa6/volumes" Oct 14 09:05:33 crc kubenswrapper[5058]: I1014 09:05:33.656305 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:05:33 crc kubenswrapper[5058]: I1014 09:05:33.657912 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:05:43 crc kubenswrapper[5058]: I1014 09:05:43.053089 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-667758b98d-p65v6" Oct 14 09:05:43 crc kubenswrapper[5058]: I1014 09:05:43.059056 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-667758b98d-p65v6" Oct 14 09:06:03 crc kubenswrapper[5058]: I1014 09:06:03.656419 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:06:03 crc kubenswrapper[5058]: I1014 09:06:03.657067 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:06:03 crc kubenswrapper[5058]: I1014 09:06:03.657133 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:06:03 crc kubenswrapper[5058]: I1014 09:06:03.658152 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:06:03 crc kubenswrapper[5058]: I1014 09:06:03.658233 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337" gracePeriod=600 Oct 14 09:06:04 crc kubenswrapper[5058]: I1014 09:06:04.231987 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337" exitCode=0 Oct 14 09:06:04 crc kubenswrapper[5058]: I1014 09:06:04.232041 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337"} Oct 14 09:06:04 crc kubenswrapper[5058]: I1014 09:06:04.232563 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"} Oct 14 09:06:04 crc kubenswrapper[5058]: I1014 09:06:04.232605 5058 scope.go:117] "RemoveContainer" containerID="0247658c4f3732f5ac4eefc51647c977e8e2d7d8c613ba8832f2902445b250e1" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.567249 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hswx5"] Oct 14 09:06:10 crc kubenswrapper[5058]: E1014 09:06:10.568152 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="init" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.568169 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="init" Oct 14 09:06:10 crc kubenswrapper[5058]: E1014 09:06:10.568209 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="dnsmasq-dns" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.568218 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="dnsmasq-dns" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.568489 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="94950ffc-adfd-4761-99a6-a458ebae9aa6" containerName="dnsmasq-dns" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.569723 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.577358 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hswx5"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.664010 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-r9nqc"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.666724 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.667896 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmkr\" (UniqueName: \"kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr\") pod \"nova-cell0-db-create-r9nqc\" (UID: \"da88b0aa-ae98-4648-9cfa-f93f1b6030aa\") " pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.667989 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98ss\" (UniqueName: \"kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss\") pod \"nova-api-db-create-hswx5\" (UID: \"446b3620-cf2b-4b00-b483-a7cf6f51a279\") " pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.673567 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r9nqc"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.769129 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmkr\" (UniqueName: \"kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr\") pod \"nova-cell0-db-create-r9nqc\" (UID: \"da88b0aa-ae98-4648-9cfa-f93f1b6030aa\") " pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.769223 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98ss\" (UniqueName: \"kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss\") pod \"nova-api-db-create-hswx5\" (UID: \"446b3620-cf2b-4b00-b483-a7cf6f51a279\") " pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.803307 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmkr\" (UniqueName: \"kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr\") pod \"nova-cell0-db-create-r9nqc\" (UID: \"da88b0aa-ae98-4648-9cfa-f93f1b6030aa\") " pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.808457 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-db-create-595tr"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.810413 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.819439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98ss\" (UniqueName: \"kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss\") pod \"nova-api-db-create-hswx5\" (UID: \"446b3620-cf2b-4b00-b483-a7cf6f51a279\") " pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.836328 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-db-create-595tr"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.863854 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-db-create-98fgt"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.866341 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.871074 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-db-create-98fgt"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.871459 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnglf\" (UniqueName: \"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf\") pod \"nova-cell2-db-create-595tr\" (UID: \"5f335715-e559-42b5-8609-6c9493ed63d1\") " pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.871560 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdlh\" (UniqueName: \"kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh\") pod \"nova-cell3-db-create-98fgt\" (UID: \"9ed56fa1-6905-4f81-94f4-68d73e91170d\") " pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.920829 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.963897 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-st5rv"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.965539 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.973545 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-st5rv"] Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.974161 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnglf\" (UniqueName: \"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf\") pod \"nova-cell2-db-create-595tr\" (UID: \"5f335715-e559-42b5-8609-6c9493ed63d1\") " pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.974250 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdlh\" (UniqueName: \"kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh\") pod \"nova-cell3-db-create-98fgt\" (UID: \"9ed56fa1-6905-4f81-94f4-68d73e91170d\") " pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.994748 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:10 crc kubenswrapper[5058]: I1014 09:06:10.995407 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnglf\" (UniqueName: \"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf\") pod \"nova-cell2-db-create-595tr\" (UID: \"5f335715-e559-42b5-8609-6c9493ed63d1\") " pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.000712 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdlh\" (UniqueName: \"kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh\") pod \"nova-cell3-db-create-98fgt\" (UID: \"9ed56fa1-6905-4f81-94f4-68d73e91170d\") " pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.075659 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbc6f\" (UniqueName: \"kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f\") pod \"nova-cell1-db-create-st5rv\" (UID: \"58b5ed5c-0a6b-41d8-b24c-3916aff7970e\") " pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.177885 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.178065 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbc6f\" (UniqueName: \"kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f\") pod \"nova-cell1-db-create-st5rv\" (UID: \"58b5ed5c-0a6b-41d8-b24c-3916aff7970e\") " pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.189443 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.196628 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbc6f\" (UniqueName: \"kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f\") pod \"nova-cell1-db-create-st5rv\" (UID: \"58b5ed5c-0a6b-41d8-b24c-3916aff7970e\") " pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.400167 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.420207 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hswx5"] Oct 14 09:06:11 crc kubenswrapper[5058]: W1014 09:06:11.443972 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446b3620_cf2b_4b00_b483_a7cf6f51a279.slice/crio-f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942 WatchSource:0}: Error finding container f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942: Status 404 returned error can't find the container with id f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942 Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.536745 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r9nqc"] Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.678844 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-db-create-98fgt"] Oct 14 09:06:11 crc kubenswrapper[5058]: W1014 09:06:11.686846 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed56fa1_6905_4f81_94f4_68d73e91170d.slice/crio-c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd WatchSource:0}: Error finding container c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd: Status 404 returned error can't find the container with id c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.688274 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-db-create-595tr"] Oct 14 09:06:11 crc kubenswrapper[5058]: I1014 09:06:11.890394 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-st5rv"] Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.339896 5058 generic.go:334] "Generic (PLEG): container finished" podID="446b3620-cf2b-4b00-b483-a7cf6f51a279" containerID="935b2eeb57a450fe5e873c969b54e4cc5e2d0e5110234740359d692d8267b9d5" exitCode=0 Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.340027 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hswx5" event={"ID":"446b3620-cf2b-4b00-b483-a7cf6f51a279","Type":"ContainerDied","Data":"935b2eeb57a450fe5e873c969b54e4cc5e2d0e5110234740359d692d8267b9d5"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.340078 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hswx5" event={"ID":"446b3620-cf2b-4b00-b483-a7cf6f51a279","Type":"ContainerStarted","Data":"f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.343015 5058 generic.go:334] "Generic (PLEG): container finished" podID="5f335715-e559-42b5-8609-6c9493ed63d1" containerID="e2f5f1cde1ba0d7dd8b1fb7a13f1a0f22df0a6f9126b2a47ce0d667d9a2f15d2" exitCode=0 Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.343137 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-db-create-595tr" event={"ID":"5f335715-e559-42b5-8609-6c9493ed63d1","Type":"ContainerDied","Data":"e2f5f1cde1ba0d7dd8b1fb7a13f1a0f22df0a6f9126b2a47ce0d667d9a2f15d2"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.343165 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell2-db-create-595tr" event={"ID":"5f335715-e559-42b5-8609-6c9493ed63d1","Type":"ContainerStarted","Data":"ddc115833cf010dd417ed789ac46d6dc07819ee04a9bcd29e2d8d976e2776215"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.345260 5058 generic.go:334] "Generic (PLEG): container finished" podID="9ed56fa1-6905-4f81-94f4-68d73e91170d" containerID="69ce234c4cdcf626558316e24ab8e7c7f0d7a97c54f15e052a703c178d4986cd" exitCode=0 Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.345291 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-db-create-98fgt" event={"ID":"9ed56fa1-6905-4f81-94f4-68d73e91170d","Type":"ContainerDied","Data":"69ce234c4cdcf626558316e24ab8e7c7f0d7a97c54f15e052a703c178d4986cd"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.345316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-db-create-98fgt" event={"ID":"9ed56fa1-6905-4f81-94f4-68d73e91170d","Type":"ContainerStarted","Data":"c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.346994 5058 generic.go:334] "Generic (PLEG): container finished" podID="58b5ed5c-0a6b-41d8-b24c-3916aff7970e" containerID="660d83f2c8d270d37baafa5f948b6a276118864f9d24dc25aa05bc1d52a15718" exitCode=0 Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.347072 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-st5rv" event={"ID":"58b5ed5c-0a6b-41d8-b24c-3916aff7970e","Type":"ContainerDied","Data":"660d83f2c8d270d37baafa5f948b6a276118864f9d24dc25aa05bc1d52a15718"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.347114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-st5rv" event={"ID":"58b5ed5c-0a6b-41d8-b24c-3916aff7970e","Type":"ContainerStarted","Data":"f2137deedf705f891db482098f2403d6d6e3176039f7c076484455e909e5bd11"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.348952 5058 generic.go:334] "Generic (PLEG): container finished" podID="da88b0aa-ae98-4648-9cfa-f93f1b6030aa" containerID="e4a5b29b94a032d4debeb8b99ee28cf79c6557d64aedc7bad22f04b37eeeca19" exitCode=0 Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.349024 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r9nqc" event={"ID":"da88b0aa-ae98-4648-9cfa-f93f1b6030aa","Type":"ContainerDied","Data":"e4a5b29b94a032d4debeb8b99ee28cf79c6557d64aedc7bad22f04b37eeeca19"} Oct 14 09:06:12 crc kubenswrapper[5058]: I1014 09:06:12.349052 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r9nqc" event={"ID":"da88b0aa-ae98-4648-9cfa-f93f1b6030aa","Type":"ContainerStarted","Data":"1dc36ba379cda790f970921e5ef1d1113e41824c3a59ddda18dc903dacb76cd5"} Oct 14 09:06:13 crc kubenswrapper[5058]: I1014 09:06:13.831861 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:13 crc kubenswrapper[5058]: I1014 09:06:13.951395 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbc6f\" (UniqueName: \"kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f\") pod \"58b5ed5c-0a6b-41d8-b24c-3916aff7970e\" (UID: \"58b5ed5c-0a6b-41d8-b24c-3916aff7970e\") " Oct 14 09:06:13 crc kubenswrapper[5058]: I1014 09:06:13.956427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f" (OuterVolumeSpecName: "kube-api-access-tbc6f") pod "58b5ed5c-0a6b-41d8-b24c-3916aff7970e" (UID: "58b5ed5c-0a6b-41d8-b24c-3916aff7970e"). InnerVolumeSpecName "kube-api-access-tbc6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:13 crc kubenswrapper[5058]: I1014 09:06:13.999017 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.007826 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.021473 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.034779 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.053848 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbc6f\" (UniqueName: \"kubernetes.io/projected/58b5ed5c-0a6b-41d8-b24c-3916aff7970e-kube-api-access-tbc6f\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.155164 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djmkr\" (UniqueName: \"kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr\") pod \"da88b0aa-ae98-4648-9cfa-f93f1b6030aa\" (UID: \"da88b0aa-ae98-4648-9cfa-f93f1b6030aa\") " Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.155526 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnglf\" (UniqueName: \"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf\") pod \"5f335715-e559-42b5-8609-6c9493ed63d1\" (UID: \"5f335715-e559-42b5-8609-6c9493ed63d1\") " Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.155565 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q98ss\" (UniqueName: \"kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss\") pod \"446b3620-cf2b-4b00-b483-a7cf6f51a279\" (UID: \"446b3620-cf2b-4b00-b483-a7cf6f51a279\") " Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.155597 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhdlh\" (UniqueName: \"kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh\") pod \"9ed56fa1-6905-4f81-94f4-68d73e91170d\" (UID: \"9ed56fa1-6905-4f81-94f4-68d73e91170d\") " Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.158818 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf" (OuterVolumeSpecName: "kube-api-access-lnglf") pod "5f335715-e559-42b5-8609-6c9493ed63d1" (UID: "5f335715-e559-42b5-8609-6c9493ed63d1"). InnerVolumeSpecName "kube-api-access-lnglf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.158892 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr" (OuterVolumeSpecName: "kube-api-access-djmkr") pod "da88b0aa-ae98-4648-9cfa-f93f1b6030aa" (UID: "da88b0aa-ae98-4648-9cfa-f93f1b6030aa"). InnerVolumeSpecName "kube-api-access-djmkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.159423 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh" (OuterVolumeSpecName: "kube-api-access-dhdlh") pod "9ed56fa1-6905-4f81-94f4-68d73e91170d" (UID: "9ed56fa1-6905-4f81-94f4-68d73e91170d"). InnerVolumeSpecName "kube-api-access-dhdlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.160849 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss" (OuterVolumeSpecName: "kube-api-access-q98ss") pod "446b3620-cf2b-4b00-b483-a7cf6f51a279" (UID: "446b3620-cf2b-4b00-b483-a7cf6f51a279"). InnerVolumeSpecName "kube-api-access-q98ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.258254 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmkr\" (UniqueName: \"kubernetes.io/projected/da88b0aa-ae98-4648-9cfa-f93f1b6030aa-kube-api-access-djmkr\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.258293 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnglf\" (UniqueName: \"kubernetes.io/projected/5f335715-e559-42b5-8609-6c9493ed63d1-kube-api-access-lnglf\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.258302 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q98ss\" (UniqueName: \"kubernetes.io/projected/446b3620-cf2b-4b00-b483-a7cf6f51a279-kube-api-access-q98ss\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.258313 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhdlh\" (UniqueName: \"kubernetes.io/projected/9ed56fa1-6905-4f81-94f4-68d73e91170d-kube-api-access-dhdlh\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.376510 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hswx5" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.376513 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hswx5" event={"ID":"446b3620-cf2b-4b00-b483-a7cf6f51a279","Type":"ContainerDied","Data":"f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942"} Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.376653 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f316712d0cdd8e98a6f9268ce55ad98ec6d71a264a241c4ede14e44c50b03942" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.378817 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-db-create-595tr" event={"ID":"5f335715-e559-42b5-8609-6c9493ed63d1","Type":"ContainerDied","Data":"ddc115833cf010dd417ed789ac46d6dc07819ee04a9bcd29e2d8d976e2776215"} Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.378867 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc115833cf010dd417ed789ac46d6dc07819ee04a9bcd29e2d8d976e2776215" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.378893 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-db-create-595tr" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.381789 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-db-create-98fgt" event={"ID":"9ed56fa1-6905-4f81-94f4-68d73e91170d","Type":"ContainerDied","Data":"c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd"} Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.381868 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67ef2ed3e165d414be58442fa0d7a2a27c4a44ae54df659af3b5ea94e4ba1bd" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.381975 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-db-create-98fgt" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.384538 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-st5rv" event={"ID":"58b5ed5c-0a6b-41d8-b24c-3916aff7970e","Type":"ContainerDied","Data":"f2137deedf705f891db482098f2403d6d6e3176039f7c076484455e909e5bd11"} Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.384591 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2137deedf705f891db482098f2403d6d6e3176039f7c076484455e909e5bd11" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.384657 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-st5rv" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.392072 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r9nqc" event={"ID":"da88b0aa-ae98-4648-9cfa-f93f1b6030aa","Type":"ContainerDied","Data":"1dc36ba379cda790f970921e5ef1d1113e41824c3a59ddda18dc903dacb76cd5"} Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.392129 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc36ba379cda790f970921e5ef1d1113e41824c3a59ddda18dc903dacb76cd5" Oct 14 09:06:14 crc kubenswrapper[5058]: I1014 09:06:14.392150 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-r9nqc" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.883954 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c786-account-create-lbwbz"] Oct 14 09:06:20 crc kubenswrapper[5058]: E1014 09:06:20.884732 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446b3620-cf2b-4b00-b483-a7cf6f51a279" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.884750 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="446b3620-cf2b-4b00-b483-a7cf6f51a279" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: E1014 09:06:20.884769 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed56fa1-6905-4f81-94f4-68d73e91170d" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.884777 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed56fa1-6905-4f81-94f4-68d73e91170d" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: E1014 09:06:20.884814 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f335715-e559-42b5-8609-6c9493ed63d1" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.884822 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f335715-e559-42b5-8609-6c9493ed63d1" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: E1014 09:06:20.884834 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da88b0aa-ae98-4648-9cfa-f93f1b6030aa" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.884842 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="da88b0aa-ae98-4648-9cfa-f93f1b6030aa" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: E1014 09:06:20.884861 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b5ed5c-0a6b-41d8-b24c-3916aff7970e" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.884869 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b5ed5c-0a6b-41d8-b24c-3916aff7970e" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885094 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="446b3620-cf2b-4b00-b483-a7cf6f51a279" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885120 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed56fa1-6905-4f81-94f4-68d73e91170d" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885132 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f335715-e559-42b5-8609-6c9493ed63d1" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885149 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="da88b0aa-ae98-4648-9cfa-f93f1b6030aa" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885164 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b5ed5c-0a6b-41d8-b24c-3916aff7970e" containerName="mariadb-database-create" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.885888 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.889292 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.895714 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c786-account-create-lbwbz"] Oct 14 09:06:20 crc kubenswrapper[5058]: I1014 09:06:20.987671 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqcl\" (UniqueName: \"kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl\") pod \"nova-api-c786-account-create-lbwbz\" (UID: \"d1043be2-dee2-4a1e-a0e9-69faae8528a8\") " pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.090025 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqcl\" (UniqueName: \"kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl\") pod \"nova-api-c786-account-create-lbwbz\" (UID: \"d1043be2-dee2-4a1e-a0e9-69faae8528a8\") " pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.092347 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-441c-account-create-r8sls"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.093538 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.095309 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.103921 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-441c-account-create-r8sls"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.127701 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqcl\" (UniqueName: \"kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl\") pod \"nova-api-c786-account-create-lbwbz\" (UID: \"d1043be2-dee2-4a1e-a0e9-69faae8528a8\") " pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.178779 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3e4f-account-create-rm4rn"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.180146 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.182939 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.189598 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e4f-account-create-rm4rn"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.191368 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkjd\" (UniqueName: \"kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd\") pod \"nova-cell0-441c-account-create-r8sls\" (UID: \"eb62eadc-7258-4ae7-b4c7-1532222f9626\") " pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.221963 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.293832 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbkjd\" (UniqueName: \"kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd\") pod \"nova-cell0-441c-account-create-r8sls\" (UID: \"eb62eadc-7258-4ae7-b4c7-1532222f9626\") " pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.294087 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctfp\" (UniqueName: \"kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp\") pod \"nova-cell1-3e4f-account-create-rm4rn\" (UID: \"fa186350-f3ef-4dc8-a639-f416520c45c7\") " pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.318609 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbkjd\" (UniqueName: \"kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd\") pod \"nova-cell0-441c-account-create-r8sls\" (UID: \"eb62eadc-7258-4ae7-b4c7-1532222f9626\") " pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.339556 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-2491-account-create-w85vz"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.341256 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.342894 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-db-secret" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.365369 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-2491-account-create-w85vz"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.388910 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-2a5a-account-create-rfgzt"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.390371 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.396727 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-db-secret" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.397975 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctfp\" (UniqueName: \"kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp\") pod \"nova-cell1-3e4f-account-create-rm4rn\" (UID: \"fa186350-f3ef-4dc8-a639-f416520c45c7\") " pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.402785 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-2a5a-account-create-rfgzt"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.409249 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.419250 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctfp\" (UniqueName: \"kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp\") pod \"nova-cell1-3e4f-account-create-rm4rn\" (UID: \"fa186350-f3ef-4dc8-a639-f416520c45c7\") " pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.498261 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.499433 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhm4\" (UniqueName: \"kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4\") pod \"nova-cell3-2a5a-account-create-rfgzt\" (UID: \"973bf351-be16-427c-97f0-058be4442fbb\") " pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.499567 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5m5v\" (UniqueName: \"kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v\") pod \"nova-cell2-2491-account-create-w85vz\" (UID: \"80876351-2288-4166-828c-a950d07d9110\") " pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.601434 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5m5v\" (UniqueName: \"kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v\") pod \"nova-cell2-2491-account-create-w85vz\" (UID: \"80876351-2288-4166-828c-a950d07d9110\") " pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.601783 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhm4\" (UniqueName: \"kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4\") pod \"nova-cell3-2a5a-account-create-rfgzt\" (UID: \"973bf351-be16-427c-97f0-058be4442fbb\") " pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.619954 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5m5v\" (UniqueName: 
\"kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v\") pod \"nova-cell2-2491-account-create-w85vz\" (UID: \"80876351-2288-4166-828c-a950d07d9110\") " pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.620689 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhm4\" (UniqueName: \"kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4\") pod \"nova-cell3-2a5a-account-create-rfgzt\" (UID: \"973bf351-be16-427c-97f0-058be4442fbb\") " pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.685517 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.781944 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.842044 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c786-account-create-lbwbz"] Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.908123 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-441c-account-create-r8sls"] Oct 14 09:06:21 crc kubenswrapper[5058]: W1014 09:06:21.911452 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb62eadc_7258_4ae7_b4c7_1532222f9626.slice/crio-9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f WatchSource:0}: Error finding container 9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f: Status 404 returned error can't find the container with id 9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f Oct 14 09:06:21 crc kubenswrapper[5058]: I1014 09:06:21.985641 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e4f-account-create-rm4rn"] Oct 14 09:06:21 crc kubenswrapper[5058]: W1014 09:06:21.989448 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa186350_f3ef_4dc8_a639_f416520c45c7.slice/crio-c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52 WatchSource:0}: Error finding container c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52: Status 404 returned error can't find the container with id c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52 Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.218602 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-2491-account-create-w85vz"] Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.313881 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-2a5a-account-create-rfgzt"] Oct 14 09:06:22 crc kubenswrapper[5058]: W1014 09:06:22.363400 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973bf351_be16_427c_97f0_058be4442fbb.slice/crio-1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f WatchSource:0}: Error finding container 1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f: Status 404 returned error can't find the container with id 1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f Oct 14 09:06:22 crc kubenswrapper[5058]: 
I1014 09:06:22.482586 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-2491-account-create-w85vz" event={"ID":"80876351-2288-4166-828c-a950d07d9110","Type":"ContainerStarted","Data":"1f90b180050ee18a552736123ec36e944dc5011f8927fedc1d278cc8e482c92a"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.484341 5058 generic.go:334] "Generic (PLEG): container finished" podID="eb62eadc-7258-4ae7-b4c7-1532222f9626" containerID="7e2b7ca119fc366ee1c655efb09e0b864bab8938c189195edbdb764468195ab5" exitCode=0 Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.484398 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-441c-account-create-r8sls" event={"ID":"eb62eadc-7258-4ae7-b4c7-1532222f9626","Type":"ContainerDied","Data":"7e2b7ca119fc366ee1c655efb09e0b864bab8938c189195edbdb764468195ab5"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.484420 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-441c-account-create-r8sls" event={"ID":"eb62eadc-7258-4ae7-b4c7-1532222f9626","Type":"ContainerStarted","Data":"9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.486057 5058 generic.go:334] "Generic (PLEG): container finished" podID="d1043be2-dee2-4a1e-a0e9-69faae8528a8" containerID="718a4c0b1affa687d939e9375caec7a99bcc36c4f3e7fc4b72489cc2ef180ec7" exitCode=0 Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.486122 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c786-account-create-lbwbz" event={"ID":"d1043be2-dee2-4a1e-a0e9-69faae8528a8","Type":"ContainerDied","Data":"718a4c0b1affa687d939e9375caec7a99bcc36c4f3e7fc4b72489cc2ef180ec7"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.486142 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c786-account-create-lbwbz" event={"ID":"d1043be2-dee2-4a1e-a0e9-69faae8528a8","Type":"ContainerStarted","Data":"833b4e2d4688c19c80a1f3cfbd69829196b019046a852c2ce66e1dec57394657"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.487056 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" event={"ID":"973bf351-be16-427c-97f0-058be4442fbb","Type":"ContainerStarted","Data":"1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.488320 5058 generic.go:334] "Generic (PLEG): container finished" podID="fa186350-f3ef-4dc8-a639-f416520c45c7" containerID="49b5f0572d513ec539e2fb4f76017d4d0f405ef94f90169d3a1746e9f4b21893" exitCode=0 Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.488362 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" event={"ID":"fa186350-f3ef-4dc8-a639-f416520c45c7","Type":"ContainerDied","Data":"49b5f0572d513ec539e2fb4f76017d4d0f405ef94f90169d3a1746e9f4b21893"} Oct 14 09:06:22 crc kubenswrapper[5058]: I1014 09:06:22.488386 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" event={"ID":"fa186350-f3ef-4dc8-a639-f416520c45c7","Type":"ContainerStarted","Data":"c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52"} Oct 14 09:06:23 crc kubenswrapper[5058]: I1014 09:06:23.502578 5058 generic.go:334] "Generic (PLEG): container finished" podID="80876351-2288-4166-828c-a950d07d9110" containerID="9aa0ef363939c6cd9a16b3c400615dded93d7ad5300c2d5bb1d63c6ba5e85e1d" exitCode=0 
Oct 14 09:06:23 crc kubenswrapper[5058]: I1014 09:06:23.502900 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-2491-account-create-w85vz" event={"ID":"80876351-2288-4166-828c-a950d07d9110","Type":"ContainerDied","Data":"9aa0ef363939c6cd9a16b3c400615dded93d7ad5300c2d5bb1d63c6ba5e85e1d"} Oct 14 09:06:23 crc kubenswrapper[5058]: I1014 09:06:23.506199 5058 generic.go:334] "Generic (PLEG): container finished" podID="973bf351-be16-427c-97f0-058be4442fbb" containerID="366fb880f272a4ff23e8f86a80cad635d74d6f29230260429916eca1b93004a9" exitCode=0 Oct 14 09:06:23 crc kubenswrapper[5058]: I1014 09:06:23.506641 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" event={"ID":"973bf351-be16-427c-97f0-058be4442fbb","Type":"ContainerDied","Data":"366fb880f272a4ff23e8f86a80cad635d74d6f29230260429916eca1b93004a9"} Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.037112 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.038255 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.094032 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.167282 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvqcl\" (UniqueName: \"kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl\") pod \"d1043be2-dee2-4a1e-a0e9-69faae8528a8\" (UID: \"d1043be2-dee2-4a1e-a0e9-69faae8528a8\") " Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.167461 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbkjd\" (UniqueName: \"kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd\") pod \"eb62eadc-7258-4ae7-b4c7-1532222f9626\" (UID: \"eb62eadc-7258-4ae7-b4c7-1532222f9626\") " Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.173438 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd" (OuterVolumeSpecName: "kube-api-access-hbkjd") pod "eb62eadc-7258-4ae7-b4c7-1532222f9626" (UID: "eb62eadc-7258-4ae7-b4c7-1532222f9626"). InnerVolumeSpecName "kube-api-access-hbkjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.192402 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl" (OuterVolumeSpecName: "kube-api-access-jvqcl") pod "d1043be2-dee2-4a1e-a0e9-69faae8528a8" (UID: "d1043be2-dee2-4a1e-a0e9-69faae8528a8"). InnerVolumeSpecName "kube-api-access-jvqcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.270250 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctfp\" (UniqueName: \"kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp\") pod \"fa186350-f3ef-4dc8-a639-f416520c45c7\" (UID: \"fa186350-f3ef-4dc8-a639-f416520c45c7\") " Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.270694 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvqcl\" (UniqueName: \"kubernetes.io/projected/d1043be2-dee2-4a1e-a0e9-69faae8528a8-kube-api-access-jvqcl\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.270711 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbkjd\" (UniqueName: \"kubernetes.io/projected/eb62eadc-7258-4ae7-b4c7-1532222f9626-kube-api-access-hbkjd\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.273303 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp" (OuterVolumeSpecName: "kube-api-access-lctfp") pod "fa186350-f3ef-4dc8-a639-f416520c45c7" (UID: "fa186350-f3ef-4dc8-a639-f416520c45c7"). InnerVolumeSpecName "kube-api-access-lctfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.373268 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lctfp\" (UniqueName: \"kubernetes.io/projected/fa186350-f3ef-4dc8-a639-f416520c45c7-kube-api-access-lctfp\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.521491 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-441c-account-create-r8sls" event={"ID":"eb62eadc-7258-4ae7-b4c7-1532222f9626","Type":"ContainerDied","Data":"9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f"} Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.521570 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c13627cb2423c4eddc2d012ea158f247794db147b31f286f6bac8f26aa15d2f" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.521517 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-441c-account-create-r8sls" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.527546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c786-account-create-lbwbz" event={"ID":"d1043be2-dee2-4a1e-a0e9-69faae8528a8","Type":"ContainerDied","Data":"833b4e2d4688c19c80a1f3cfbd69829196b019046a852c2ce66e1dec57394657"} Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.527625 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="833b4e2d4688c19c80a1f3cfbd69829196b019046a852c2ce66e1dec57394657" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.527562 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c786-account-create-lbwbz" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.530357 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" event={"ID":"fa186350-f3ef-4dc8-a639-f416520c45c7","Type":"ContainerDied","Data":"c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52"} Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.530408 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c824935977df8cec5cc653f8ca430a657650e4e0a95b47e0bbf632c5085ded52" Oct 14 09:06:24 crc kubenswrapper[5058]: I1014 09:06:24.530616 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e4f-account-create-rm4rn" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.013718 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.020603 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.190931 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5m5v\" (UniqueName: \"kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v\") pod \"80876351-2288-4166-828c-a950d07d9110\" (UID: \"80876351-2288-4166-828c-a950d07d9110\") " Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.191021 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fhm4\" (UniqueName: \"kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4\") pod \"973bf351-be16-427c-97f0-058be4442fbb\" (UID: \"973bf351-be16-427c-97f0-058be4442fbb\") " Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.196139 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v" (OuterVolumeSpecName: "kube-api-access-g5m5v") pod "80876351-2288-4166-828c-a950d07d9110" (UID: "80876351-2288-4166-828c-a950d07d9110"). InnerVolumeSpecName "kube-api-access-g5m5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.196450 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4" (OuterVolumeSpecName: "kube-api-access-2fhm4") pod "973bf351-be16-427c-97f0-058be4442fbb" (UID: "973bf351-be16-427c-97f0-058be4442fbb"). InnerVolumeSpecName "kube-api-access-2fhm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.293209 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5m5v\" (UniqueName: \"kubernetes.io/projected/80876351-2288-4166-828c-a950d07d9110-kube-api-access-g5m5v\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.293261 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fhm4\" (UniqueName: \"kubernetes.io/projected/973bf351-be16-427c-97f0-058be4442fbb-kube-api-access-2fhm4\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.555895 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-2491-account-create-w85vz" event={"ID":"80876351-2288-4166-828c-a950d07d9110","Type":"ContainerDied","Data":"1f90b180050ee18a552736123ec36e944dc5011f8927fedc1d278cc8e482c92a"} Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.556156 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f90b180050ee18a552736123ec36e944dc5011f8927fedc1d278cc8e482c92a" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.555959 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-2491-account-create-w85vz" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.557697 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" event={"ID":"973bf351-be16-427c-97f0-058be4442fbb","Type":"ContainerDied","Data":"1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f"} Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.557745 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5a05e3d0bca129d43bf9925501528b50c14fea1e1d39f7e9b34a78de25f05f" Oct 14 09:06:25 crc kubenswrapper[5058]: I1014 09:06:25.557906 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-2a5a-account-create-rfgzt" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.613675 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2sg4"] Oct 14 09:06:26 crc kubenswrapper[5058]: E1014 09:06:26.622833 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb62eadc-7258-4ae7-b4c7-1532222f9626" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.623260 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb62eadc-7258-4ae7-b4c7-1532222f9626" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: E1014 09:06:26.623364 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80876351-2288-4166-828c-a950d07d9110" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.623434 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="80876351-2288-4166-828c-a950d07d9110" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: E1014 09:06:26.623542 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa186350-f3ef-4dc8-a639-f416520c45c7" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.623610 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa186350-f3ef-4dc8-a639-f416520c45c7" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: E1014 09:06:26.623672 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1043be2-dee2-4a1e-a0e9-69faae8528a8" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.623721 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1043be2-dee2-4a1e-a0e9-69faae8528a8" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: E1014 09:06:26.623790 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973bf351-be16-427c-97f0-058be4442fbb" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.623856 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="973bf351-be16-427c-97f0-058be4442fbb" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.624296 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="80876351-2288-4166-828c-a950d07d9110" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.624372 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb62eadc-7258-4ae7-b4c7-1532222f9626" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.624429 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="973bf351-be16-427c-97f0-058be4442fbb" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.624481 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa186350-f3ef-4dc8-a639-f416520c45c7" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.624536 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1043be2-dee2-4a1e-a0e9-69faae8528a8" containerName="mariadb-account-create" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.625490 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.630722 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.631015 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.631366 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n8vnr" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.643735 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2sg4"] Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.721741 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjzh\" (UniqueName: \"kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.721839 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.721907 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.722062 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.824334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.824838 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.825246 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjzh\" (UniqueName: \"kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: 
\"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.825544 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.830765 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.838695 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.839368 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.858848 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjzh\" (UniqueName: \"kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh\") pod \"nova-cell0-conductor-db-sync-l2sg4\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:26 crc kubenswrapper[5058]: I1014 09:06:26.963603 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:27 crc kubenswrapper[5058]: I1014 09:06:27.482153 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2sg4"] Oct 14 09:06:27 crc kubenswrapper[5058]: W1014 09:06:27.485923 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634840fa_f062_41bf_95e9_6e6171820f13.slice/crio-9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54 WatchSource:0}: Error finding container 9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54: Status 404 returned error can't find the container with id 9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54 Oct 14 09:06:27 crc kubenswrapper[5058]: I1014 09:06:27.575218 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" event={"ID":"634840fa-f062-41bf-95e9-6e6171820f13","Type":"ContainerStarted","Data":"9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54"} Oct 14 09:06:37 crc kubenswrapper[5058]: I1014 09:06:37.690204 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" event={"ID":"634840fa-f062-41bf-95e9-6e6171820f13","Type":"ContainerStarted","Data":"37343111f9a912598fd3f4fcc12b6a56d84e20cfb904aa8432406a9766f03172"} Oct 14 09:06:37 crc kubenswrapper[5058]: I1014 09:06:37.715123 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" podStartSLOduration=1.804660829 podStartE2EDuration="11.715103642s" podCreationTimestamp="2025-10-14 09:06:26 +0000 UTC" firstStartedPulling="2025-10-14 09:06:27.487918023 +0000 UTC m=+8335.399001829" lastFinishedPulling="2025-10-14 09:06:37.398360836 +0000 UTC m=+8345.309444642" observedRunningTime="2025-10-14 09:06:37.710909392 +0000 UTC m=+8345.621993258" watchObservedRunningTime="2025-10-14 09:06:37.715103642 +0000 UTC m=+8345.626187448" Oct 14 09:06:42 crc kubenswrapper[5058]: I1014 09:06:42.768288 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" event={"ID":"634840fa-f062-41bf-95e9-6e6171820f13","Type":"ContainerDied","Data":"37343111f9a912598fd3f4fcc12b6a56d84e20cfb904aa8432406a9766f03172"} Oct 14 09:06:42 crc kubenswrapper[5058]: I1014 09:06:42.768268 5058 generic.go:334] "Generic (PLEG): container finished" podID="634840fa-f062-41bf-95e9-6e6171820f13" containerID="37343111f9a912598fd3f4fcc12b6a56d84e20cfb904aa8432406a9766f03172" exitCode=0 Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.207636 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.306120 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjzh\" (UniqueName: \"kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh\") pod \"634840fa-f062-41bf-95e9-6e6171820f13\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.306210 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle\") pod \"634840fa-f062-41bf-95e9-6e6171820f13\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.306374 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data\") pod \"634840fa-f062-41bf-95e9-6e6171820f13\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.306523 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts\") pod \"634840fa-f062-41bf-95e9-6e6171820f13\" (UID: \"634840fa-f062-41bf-95e9-6e6171820f13\") " Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.312125 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts" (OuterVolumeSpecName: "scripts") pod "634840fa-f062-41bf-95e9-6e6171820f13" (UID: "634840fa-f062-41bf-95e9-6e6171820f13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.313154 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh" (OuterVolumeSpecName: "kube-api-access-htjzh") pod "634840fa-f062-41bf-95e9-6e6171820f13" (UID: "634840fa-f062-41bf-95e9-6e6171820f13"). InnerVolumeSpecName "kube-api-access-htjzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.330540 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data" (OuterVolumeSpecName: "config-data") pod "634840fa-f062-41bf-95e9-6e6171820f13" (UID: "634840fa-f062-41bf-95e9-6e6171820f13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.343231 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "634840fa-f062-41bf-95e9-6e6171820f13" (UID: "634840fa-f062-41bf-95e9-6e6171820f13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.409050 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjzh\" (UniqueName: \"kubernetes.io/projected/634840fa-f062-41bf-95e9-6e6171820f13-kube-api-access-htjzh\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.409112 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.409129 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.409146 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/634840fa-f062-41bf-95e9-6e6171820f13-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.796054 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.816010 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2sg4" event={"ID":"634840fa-f062-41bf-95e9-6e6171820f13","Type":"ContainerDied","Data":"9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54"} Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.816090 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4ea288eb7272a7f07de38776fbb5d4a716aa40350fed7ac45abd5755aabd54" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.914974 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:06:44 crc kubenswrapper[5058]: E1014 09:06:44.915733 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634840fa-f062-41bf-95e9-6e6171820f13" containerName="nova-cell0-conductor-db-sync" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.915780 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="634840fa-f062-41bf-95e9-6e6171820f13" containerName="nova-cell0-conductor-db-sync" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.916257 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="634840fa-f062-41bf-95e9-6e6171820f13" containerName="nova-cell0-conductor-db-sync" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.917656 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.923399 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.923886 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n8vnr" Oct 14 09:06:44 crc kubenswrapper[5058]: I1014 09:06:44.938363 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.020161 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdgd\" (UniqueName: \"kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.020512 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.020666 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.122184 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.122273 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.122340 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkdgd\" (UniqueName: \"kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.139564 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.142946 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.144070 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkdgd\" (UniqueName: \"kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd\") pod \"nova-cell0-conductor-0\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.269283 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.775657 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:06:45 crc kubenswrapper[5058]: W1014 09:06:45.782747 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0dc5f2_ff52_4c09_97f7_be156e1d0847.slice/crio-90911c806300bc035fef057ce93b59d5b38c6cbc86435a3eae5be886133b264d WatchSource:0}: Error finding container 90911c806300bc035fef057ce93b59d5b38c6cbc86435a3eae5be886133b264d: Status 404 returned error can't find the container with id 90911c806300bc035fef057ce93b59d5b38c6cbc86435a3eae5be886133b264d Oct 14 09:06:45 crc kubenswrapper[5058]: I1014 09:06:45.810670 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f0dc5f2-ff52-4c09-97f7-be156e1d0847","Type":"ContainerStarted","Data":"90911c806300bc035fef057ce93b59d5b38c6cbc86435a3eae5be886133b264d"} Oct 14 09:06:46 crc kubenswrapper[5058]: I1014 09:06:46.843275 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f0dc5f2-ff52-4c09-97f7-be156e1d0847","Type":"ContainerStarted","Data":"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55"} Oct 14 09:06:46 crc kubenswrapper[5058]: I1014 09:06:46.843697 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:46 crc kubenswrapper[5058]: I1014 09:06:46.871863 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.871835377 podStartE2EDuration="2.871835377s" podCreationTimestamp="2025-10-14 09:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:46.85794633 +0000 UTC m=+8354.769030146" watchObservedRunningTime="2025-10-14 09:06:46.871835377 +0000 UTC m=+8354.782919213" Oct 14 09:06:50 crc kubenswrapper[5058]: I1014 09:06:50.321891 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 14 09:06:50 crc kubenswrapper[5058]: I1014 09:06:50.971906 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h584n"] Oct 14 09:06:50 crc kubenswrapper[5058]: I1014 09:06:50.988657 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.005502 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.005744 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.019658 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h584n"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.138977 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.140481 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.142006 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.143697 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.143837 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnxf\" (UniqueName: \"kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.143910 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.144014 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.153386 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.182859 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.185375 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.189474 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.197816 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247743 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247808 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247862 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247892 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnxf\" (UniqueName: \"kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247917 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.247965 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.248038 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7gm\" (UniqueName: \"kubernetes.io/projected/c6c32740-d479-4c2b-91ad-60bd6955ce11-kube-api-access-rn7gm\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.253701 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 
09:06:51.270029 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.270688 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.271769 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnxf\" (UniqueName: \"kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf\") pod \"nova-cell0-cell-mapping-h584n\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.304553 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.306937 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.319067 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.336643 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.342768 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.344411 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.351154 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.353967 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.354019 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7gm\" (UniqueName: \"kubernetes.io/projected/c6c32740-d479-4c2b-91ad-60bd6955ce11-kube-api-access-rn7gm\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.354052 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.354128 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.354198 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.354227 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jpj\" (UniqueName: \"kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.402727 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.430498 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.431685 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7gm\" (UniqueName: \"kubernetes.io/projected/c6c32740-d479-4c2b-91ad-60bd6955ce11-kube-api-access-rn7gm\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.436448 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c32740-d479-4c2b-91ad-60bd6955ce11-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"c6c32740-d479-4c2b-91ad-60bd6955ce11\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.461465 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.462227 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463355 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463396 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463424 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmsv\" (UniqueName: \"kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463452 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463479 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.463507 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jpj\" (UniqueName: \"kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.464692 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.464804 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.464873 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.464949 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.465002 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45j8\" (UniqueName: \"kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.472014 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.501244 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.504515 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jpj\" (UniqueName: \"kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj\") pod \"nova-scheduler-0\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.506145 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.519963 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.521100 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.525174 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-novncproxy-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566460 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566557 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45j8\" (UniqueName: \"kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566592 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566612 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566628 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmsv\" (UniqueName: \"kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566649 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.566671 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.570101 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.571697 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.573762 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.573988 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.574449 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.574887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.574906 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.589524 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmsv\" (UniqueName: \"kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv\") pod \"nova-api-0\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.591099 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45j8\" (UniqueName: \"kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8\") pod \"nova-metadata-0\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.596983 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.598724 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.607527 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.622507 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.624280 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.628448 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-novncproxy-config-data" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.635419 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-novncproxy-0"] Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.691882 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-combined-ca-bundle\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.691985 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsc8\" (UniqueName: \"kubernetes.io/projected/25111cea-43f8-465b-a3f4-33933be1b593-kube-api-access-8qsc8\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.692020 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-config-data\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.793863 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hkx\" (UniqueName: \"kubernetes.io/projected/3e9a43ee-d5b2-4abd-9daf-e37c88284049-kube-api-access-69hkx\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.793915 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-combined-ca-bundle\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794033 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-config-data\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794069 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsc8\" (UniqueName: \"kubernetes.io/projected/25111cea-43f8-465b-a3f4-33933be1b593-kube-api-access-8qsc8\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794107 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " 
pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794139 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-config-data\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794156 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794305 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-combined-ca-bundle\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794415 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbvr\" (UniqueName: \"kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794441 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.794487 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.798016 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-config-data\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.799008 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25111cea-43f8-465b-a3f4-33933be1b593-combined-ca-bundle\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.812191 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsc8\" (UniqueName: \"kubernetes.io/projected/25111cea-43f8-465b-a3f4-33933be1b593-kube-api-access-8qsc8\") pod \"nova-cell2-novncproxy-0\" (UID: \"25111cea-43f8-465b-a3f4-33933be1b593\") " 
pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.843274 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.875184 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.884554 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.896326 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.896834 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.896924 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-combined-ca-bundle\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.896995 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbvr\" (UniqueName: \"kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.897031 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.897070 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.897155 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hkx\" (UniqueName: \"kubernetes.io/projected/3e9a43ee-d5b2-4abd-9daf-e37c88284049-kube-api-access-69hkx\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.897282 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-config-data\") pod \"nova-cell3-novncproxy-0\" (UID: 
\"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.897581 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.898613 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.898768 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.901072 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.911296 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-combined-ca-bundle\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.917130 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9a43ee-d5b2-4abd-9daf-e37c88284049-config-data\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.918991 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbvr\" (UniqueName: \"kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr\") pod \"dnsmasq-dns-6c68978599-5w4l9\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.921261 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hkx\" (UniqueName: \"kubernetes.io/projected/3e9a43ee-d5b2-4abd-9daf-e37c88284049-kube-api-access-69hkx\") pod \"nova-cell3-novncproxy-0\" (UID: \"3e9a43ee-d5b2-4abd-9daf-e37c88284049\") " pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.942534 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:06:51 crc kubenswrapper[5058]: I1014 09:06:51.952949 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:51.996576 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h584n"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.113465 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.164845 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: W1014 09:06:52.169913 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c151ec_f2c9_4521_9b3f_3082653b874e.slice/crio-53f8908a3cb7bee929e2c636f7c850078298ab6b4aba651cb847998f3ca30bd6 WatchSource:0}: Error finding container 53f8908a3cb7bee929e2c636f7c850078298ab6b4aba651cb847998f3ca30bd6: Status 404 returned error can't find the container with id 53f8908a3cb7bee929e2c636f7c850078298ab6b4aba651cb847998f3ca30bd6 Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.372780 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.446402 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-55r6w"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.447703 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.450345 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.450615 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.455419 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-55r6w"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.509741 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-novncproxy-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.540080 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-conductor-db-sync-7hvm2"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.541490 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.552197 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-conductor-config-data" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.552399 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-conductor-scripts" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.575082 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-db-sync-7hvm2"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.632551 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6g6s\" (UniqueName: \"kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.632705 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.632742 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.632828 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.641877 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-conductor-db-sync-rmkxf"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.643446 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.647258 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-conductor-scripts" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.663519 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-conductor-config-data" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.664750 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-db-sync-rmkxf"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.695056 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-novncproxy-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.706006 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:06:52 crc kubenswrapper[5058]: W1014 09:06:52.747684 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeaaf634_252e_4273_be94_541559e377f7.slice/crio-3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b WatchSource:0}: Error finding container 3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b: Status 404 returned error can't find the container with id 3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.756690 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.756834 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.756893 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqg8t\" (UniqueName: \"kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.756931 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6g6s\" (UniqueName: \"kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.756975 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 
crc kubenswrapper[5058]: I1014 09:06:52.757095 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757233 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrd27\" (UniqueName: \"kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757458 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757494 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757921 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757944 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.757961 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: W1014 09:06:52.760233 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e9a43ee_d5b2_4abd_9daf_e37c88284049.slice/crio-284ac3e34eb7a63d6603f693b5ae973c8b69d1feb00fbd7df75f92fa8fdc4320 WatchSource:0}: Error finding container 284ac3e34eb7a63d6603f693b5ae973c8b69d1feb00fbd7df75f92fa8fdc4320: Status 404 returned error can't find the container with id 284ac3e34eb7a63d6603f693b5ae973c8b69d1feb00fbd7df75f92fa8fdc4320 Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.776230 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.776755 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.778478 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.779884 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.788629 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6g6s\" (UniqueName: \"kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s\") pod \"nova-cell1-conductor-db-sync-55r6w\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.863291 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.863735 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqg8t\" (UniqueName: \"kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.863881 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.864045 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.864262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrd27\" (UniqueName: \"kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " 
pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.864357 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.864448 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.864516 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.868068 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.869162 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.872211 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.873091 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.874995 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.878260 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 
14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.882416 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqg8t\" (UniqueName: \"kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t\") pod \"nova-cell3-conductor-db-sync-rmkxf\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.885108 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrd27\" (UniqueName: \"kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27\") pod \"nova-cell2-conductor-db-sync-7hvm2\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.918551 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerStarted","Data":"69ca8dfcd42b35e444c54f39f2889c2d1e2c873b7b2a115f0b2f8fba4a8844ba"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.923673 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-novncproxy-0" event={"ID":"25111cea-43f8-465b-a3f4-33933be1b593","Type":"ContainerStarted","Data":"a68feb08d012422b714fa8bb8fdca8cc95babf0f8ee90b3d2a33c7c9fd2bcd80"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.926231 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72c151ec-f2c9-4521-9b3f-3082653b874e","Type":"ContainerStarted","Data":"53f8908a3cb7bee929e2c636f7c850078298ab6b4aba651cb847998f3ca30bd6"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.927696 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" event={"ID":"deaaf634-252e-4273-be94-541559e377f7","Type":"ContainerStarted","Data":"3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.929897 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerStarted","Data":"727ed923c45f90d5d077bf54afae9301fa8e14894044bec84cf9b206ce0b1f3f"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.932941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-novncproxy-0" event={"ID":"3e9a43ee-d5b2-4abd-9daf-e37c88284049","Type":"ContainerStarted","Data":"284ac3e34eb7a63d6603f693b5ae973c8b69d1feb00fbd7df75f92fa8fdc4320"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.942249 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h584n" event={"ID":"ed21e369-4fc4-44d6-a38a-1740d99705bf","Type":"ContainerStarted","Data":"6103137bfa1c07719266ed40132a241f1f122352577df59ec1353520b3bfa564"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.942557 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h584n" event={"ID":"ed21e369-4fc4-44d6-a38a-1740d99705bf","Type":"ContainerStarted","Data":"d8f523441e7a8080a2b2c32db74e40be98edd88759c34451d72ae4c0fe8a4fb7"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.944182 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"c6c32740-d479-4c2b-91ad-60bd6955ce11","Type":"ContainerStarted","Data":"755fa2eeb9eafb150e3b8d73f5200875529e2e6fb619912288a96ab455ab6dae"} Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.979913 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:52 crc kubenswrapper[5058]: I1014 09:06:52.990486 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h584n" podStartSLOduration=2.990469527 podStartE2EDuration="2.990469527s" podCreationTimestamp="2025-10-14 09:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:52.984766624 +0000 UTC m=+8360.895850440" watchObservedRunningTime="2025-10-14 09:06:52.990469527 +0000 UTC m=+8360.901553333" Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.076864 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.176272 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.467210 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-db-sync-rmkxf"] Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.641483 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-55r6w"] Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.769006 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-db-sync-7hvm2"] Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.957436 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-55r6w" event={"ID":"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7","Type":"ContainerStarted","Data":"c63ed0f1b04fabc42bdab392b76e92e0e2cbdb9f95766e774f9d9f46abcd9d64"} Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.959241 5058 generic.go:334] "Generic (PLEG): container finished" podID="deaaf634-252e-4273-be94-541559e377f7" containerID="fb05a9923ad1ca52797b27c80209f83ea1eaf6f1564ee5f11aad423148988171" exitCode=0 Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.959291 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" event={"ID":"deaaf634-252e-4273-be94-541559e377f7","Type":"ContainerDied","Data":"fb05a9923ad1ca52797b27c80209f83ea1eaf6f1564ee5f11aad423148988171"} Oct 14 09:06:53 crc kubenswrapper[5058]: I1014 09:06:53.961323 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" event={"ID":"dfebb4c9-3b48-4463-8e95-d6b68646ba51","Type":"ContainerStarted","Data":"e13089a082d2cbd3affa03a2e4b09ab9adb3971bd408648b7ed1a02a78531060"} Oct 14 09:06:54 crc kubenswrapper[5058]: W1014 09:06:54.104521 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa59b2f_7cff_4d52_826e_eda45ae51a10.slice/crio-b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0 WatchSource:0}: Error finding container b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0: Status 404 returned error can't find the container with id b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0 
Oct 14 09:06:54 crc kubenswrapper[5058]: I1014 09:06:54.975549 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" event={"ID":"dfebb4c9-3b48-4463-8e95-d6b68646ba51","Type":"ContainerStarted","Data":"6ea0c5e0ab4bfd6eb35787bddd607712b68538cc9fade45b0ba5f50827d3c3c0"}
Oct 14 09:06:54 crc kubenswrapper[5058]: I1014 09:06:54.977017 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" event={"ID":"0fa59b2f-7cff-4d52-826e-eda45ae51a10","Type":"ContainerStarted","Data":"b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0"}
Oct 14 09:06:54 crc kubenswrapper[5058]: I1014 09:06:54.995831 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" podStartSLOduration=2.995809603 podStartE2EDuration="2.995809603s" podCreationTimestamp="2025-10-14 09:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:54.989681268 +0000 UTC m=+8362.900765084" watchObservedRunningTime="2025-10-14 09:06:54.995809603 +0000 UTC m=+8362.906893409"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.002392 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" event={"ID":"0fa59b2f-7cff-4d52-826e-eda45ae51a10","Type":"ContainerStarted","Data":"c0ae6a2c000919076b9931d9f0747226e73fe7ed486da06a47ca43204b52b59f"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.006129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" event={"ID":"deaaf634-252e-4273-be94-541559e377f7","Type":"ContainerStarted","Data":"259a8210a3ea80b0d226e3b994fafb64254fe2072b053941265ece8c91cd46a3"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.007099 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c68978599-5w4l9"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.009441 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerStarted","Data":"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.009465 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerStarted","Data":"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.011645 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerStarted","Data":"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.011667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerStarted","Data":"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.013816 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-novncproxy-0" event={"ID":"25111cea-43f8-465b-a3f4-33933be1b593","Type":"ContainerStarted","Data":"8bab09242b06dbfe77c00489fa1d18bf448fea538e9b6aba04076888a9c01315"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.016046 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72c151ec-f2c9-4521-9b3f-3082653b874e","Type":"ContainerStarted","Data":"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.020062 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" podStartSLOduration=5.020050019 podStartE2EDuration="5.020050019s" podCreationTimestamp="2025-10-14 09:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:57.019333248 +0000 UTC m=+8364.930417054" watchObservedRunningTime="2025-10-14 09:06:57.020050019 +0000 UTC m=+8364.931133825"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.021428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-novncproxy-0" event={"ID":"3e9a43ee-d5b2-4abd-9daf-e37c88284049","Type":"ContainerStarted","Data":"e18b11a30c1e8ef1096010bf90d8dfad55f2ddc075279122c86553dba424fb4b"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.023550 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c6c32740-d479-4c2b-91ad-60bd6955ce11","Type":"ContainerStarted","Data":"1798d3597513fc9dde33cb30e1190f39b68a58a9ac4e6edca1c9b9e5f68e9c20"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.028316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-55r6w" event={"ID":"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7","Type":"ContainerStarted","Data":"ed03a4dd57b03ecb6e34a44b6723c62a4e63a6750466b0f409342084f9b282d6"}
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.037356 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-novncproxy-0" podStartSLOduration=2.9422824309999998 podStartE2EDuration="6.037338343s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.531450194 +0000 UTC m=+8360.442534000" lastFinishedPulling="2025-10-14 09:06:55.626506066 +0000 UTC m=+8363.537589912" observedRunningTime="2025-10-14 09:06:57.032874816 +0000 UTC m=+8364.943958622" watchObservedRunningTime="2025-10-14 09:06:57.037338343 +0000 UTC m=+8364.948422149"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.055460 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" podStartSLOduration=6.055440371 podStartE2EDuration="6.055440371s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:57.050462358 +0000 UTC m=+8364.961546164" watchObservedRunningTime="2025-10-14 09:06:57.055440371 +0000 UTC m=+8364.966524177"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.069533 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.807636101 podStartE2EDuration="6.069519023s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.376706319 +0000 UTC m=+8360.287790125" lastFinishedPulling="2025-10-14 09:06:55.638589201 +0000 UTC m=+8363.549673047" observedRunningTime="2025-10-14 09:06:57.06590794 +0000 UTC m=+8364.976991766" watchObservedRunningTime="2025-10-14 09:06:57.069519023 +0000 UTC m=+8364.980602829"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.099801 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.720073637 podStartE2EDuration="6.099772358s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.171397669 +0000 UTC m=+8360.082481475" lastFinishedPulling="2025-10-14 09:06:55.55109635 +0000 UTC m=+8363.462180196" observedRunningTime="2025-10-14 09:06:57.088549977 +0000 UTC m=+8364.999633783" watchObservedRunningTime="2025-10-14 09:06:57.099772358 +0000 UTC m=+8365.010856164"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.111923 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.271037001 podStartE2EDuration="6.111907085s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.785548249 +0000 UTC m=+8360.696632055" lastFinishedPulling="2025-10-14 09:06:55.626418313 +0000 UTC m=+8363.537502139" observedRunningTime="2025-10-14 09:06:57.109065834 +0000 UTC m=+8365.020149660" watchObservedRunningTime="2025-10-14 09:06:57.111907085 +0000 UTC m=+8365.022990891"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.128820 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-55r6w" podStartSLOduration=5.128786348 podStartE2EDuration="5.128786348s" podCreationTimestamp="2025-10-14 09:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:06:57.122086656 +0000 UTC m=+8365.033170472" watchObservedRunningTime="2025-10-14 09:06:57.128786348 +0000 UTC m=+8365.039870154"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.140987 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell3-novncproxy-0" podStartSLOduration=3.308716578 podStartE2EDuration="6.140973016s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.785502118 +0000 UTC m=+8360.696585914" lastFinishedPulling="2025-10-14 09:06:55.617758516 +0000 UTC m=+8363.528842352" observedRunningTime="2025-10-14 09:06:57.133438191 +0000 UTC m=+8365.044521997" watchObservedRunningTime="2025-10-14 09:06:57.140973016 +0000 UTC m=+8365.052056822"
Oct 14 09:06:57 crc kubenswrapper[5058]: I1014 09:06:57.154171 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.712334927 podStartE2EDuration="6.154152513s" podCreationTimestamp="2025-10-14 09:06:51 +0000 UTC" firstStartedPulling="2025-10-14 09:06:52.134827774 +0000 UTC m=+8360.045911580" lastFinishedPulling="2025-10-14 09:06:55.57664535 +0000 UTC m=+8363.487729166" observedRunningTime="2025-10-14 09:06:57.150921811 +0000 UTC m=+8365.062005617" watchObservedRunningTime="2025-10-14 09:06:57.154152513 +0000 UTC m=+8365.065236329"
Oct 14 09:06:58 crc kubenswrapper[5058]: I1014 09:06:58.038517 5058 generic.go:334] "Generic (PLEG): container finished" podID="ed21e369-4fc4-44d6-a38a-1740d99705bf" containerID="6103137bfa1c07719266ed40132a241f1f122352577df59ec1353520b3bfa564" exitCode=0
event={"ID":"ed21e369-4fc4-44d6-a38a-1740d99705bf","Type":"ContainerDied","Data":"6103137bfa1c07719266ed40132a241f1f122352577df59ec1353520b3bfa564"} Oct 14 09:06:58 crc kubenswrapper[5058]: I1014 09:06:58.040838 5058 generic.go:334] "Generic (PLEG): container finished" podID="dfebb4c9-3b48-4463-8e95-d6b68646ba51" containerID="6ea0c5e0ab4bfd6eb35787bddd607712b68538cc9fade45b0ba5f50827d3c3c0" exitCode=0 Oct 14 09:06:58 crc kubenswrapper[5058]: I1014 09:06:58.041651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" event={"ID":"dfebb4c9-3b48-4463-8e95-d6b68646ba51","Type":"ContainerDied","Data":"6ea0c5e0ab4bfd6eb35787bddd607712b68538cc9fade45b0ba5f50827d3c3c0"} Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.058096 5058 generic.go:334] "Generic (PLEG): container finished" podID="0fa59b2f-7cff-4d52-826e-eda45ae51a10" containerID="c0ae6a2c000919076b9931d9f0747226e73fe7ed486da06a47ca43204b52b59f" exitCode=0 Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.058252 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" event={"ID":"0fa59b2f-7cff-4d52-826e-eda45ae51a10","Type":"ContainerDied","Data":"c0ae6a2c000919076b9931d9f0747226e73fe7ed486da06a47ca43204b52b59f"} Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.061008 5058 generic.go:334] "Generic (PLEG): container finished" podID="ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" containerID="ed03a4dd57b03ecb6e34a44b6723c62a4e63a6750466b0f409342084f9b282d6" exitCode=0 Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.061157 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-55r6w" event={"ID":"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7","Type":"ContainerDied","Data":"ed03a4dd57b03ecb6e34a44b6723c62a4e63a6750466b0f409342084f9b282d6"} Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.464867 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.585504 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.638395 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle\") pod \"ed21e369-4fc4-44d6-a38a-1740d99705bf\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.638466 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wnxf\" (UniqueName: \"kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf\") pod \"ed21e369-4fc4-44d6-a38a-1740d99705bf\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.638514 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts\") pod \"ed21e369-4fc4-44d6-a38a-1740d99705bf\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.638669 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data\") pod \"ed21e369-4fc4-44d6-a38a-1740d99705bf\" (UID: \"ed21e369-4fc4-44d6-a38a-1740d99705bf\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.644490 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts" (OuterVolumeSpecName: "scripts") pod "ed21e369-4fc4-44d6-a38a-1740d99705bf" (UID: "ed21e369-4fc4-44d6-a38a-1740d99705bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.644712 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf" (OuterVolumeSpecName: "kube-api-access-9wnxf") pod "ed21e369-4fc4-44d6-a38a-1740d99705bf" (UID: "ed21e369-4fc4-44d6-a38a-1740d99705bf"). InnerVolumeSpecName "kube-api-access-9wnxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.667665 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data" (OuterVolumeSpecName: "config-data") pod "ed21e369-4fc4-44d6-a38a-1740d99705bf" (UID: "ed21e369-4fc4-44d6-a38a-1740d99705bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.673214 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed21e369-4fc4-44d6-a38a-1740d99705bf" (UID: "ed21e369-4fc4-44d6-a38a-1740d99705bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.740415 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqg8t\" (UniqueName: \"kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t\") pod \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.740560 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle\") pod \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.740770 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data\") pod \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.740916 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts\") pod \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\" (UID: \"dfebb4c9-3b48-4463-8e95-d6b68646ba51\") " Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.741651 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.741680 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.741700 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wnxf\" (UniqueName: \"kubernetes.io/projected/ed21e369-4fc4-44d6-a38a-1740d99705bf-kube-api-access-9wnxf\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.741718 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e369-4fc4-44d6-a38a-1740d99705bf-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.743868 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts" (OuterVolumeSpecName: "scripts") pod "dfebb4c9-3b48-4463-8e95-d6b68646ba51" (UID: "dfebb4c9-3b48-4463-8e95-d6b68646ba51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.744260 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t" (OuterVolumeSpecName: "kube-api-access-tqg8t") pod "dfebb4c9-3b48-4463-8e95-d6b68646ba51" (UID: "dfebb4c9-3b48-4463-8e95-d6b68646ba51"). InnerVolumeSpecName "kube-api-access-tqg8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.771591 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfebb4c9-3b48-4463-8e95-d6b68646ba51" (UID: "dfebb4c9-3b48-4463-8e95-d6b68646ba51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.784622 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data" (OuterVolumeSpecName: "config-data") pod "dfebb4c9-3b48-4463-8e95-d6b68646ba51" (UID: "dfebb4c9-3b48-4463-8e95-d6b68646ba51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.843627 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.843679 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.843701 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqg8t\" (UniqueName: \"kubernetes.io/projected/dfebb4c9-3b48-4463-8e95-d6b68646ba51-kube-api-access-tqg8t\") on node \"crc\" DevicePath \"\"" Oct 14 09:06:59 crc kubenswrapper[5058]: I1014 09:06:59.843721 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebb4c9-3b48-4463-8e95-d6b68646ba51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.077556 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.077534 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-db-sync-rmkxf" event={"ID":"dfebb4c9-3b48-4463-8e95-d6b68646ba51","Type":"ContainerDied","Data":"e13089a082d2cbd3affa03a2e4b09ab9adb3971bd408648b7ed1a02a78531060"} Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.078835 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e13089a082d2cbd3affa03a2e4b09ab9adb3971bd408648b7ed1a02a78531060" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.083380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h584n" event={"ID":"ed21e369-4fc4-44d6-a38a-1740d99705bf","Type":"ContainerDied","Data":"d8f523441e7a8080a2b2c32db74e40be98edd88759c34451d72ae4c0fe8a4fb7"} Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.083477 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f523441e7a8080a2b2c32db74e40be98edd88759c34451d72ae4c0fe8a4fb7" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.083588 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h584n" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.176549 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:07:00 crc kubenswrapper[5058]: E1014 09:07:00.177066 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed21e369-4fc4-44d6-a38a-1740d99705bf" containerName="nova-manage" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.177082 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed21e369-4fc4-44d6-a38a-1740d99705bf" containerName="nova-manage" Oct 14 09:07:00 crc kubenswrapper[5058]: E1014 09:07:00.177094 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfebb4c9-3b48-4463-8e95-d6b68646ba51" containerName="nova-cell3-conductor-db-sync" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.177101 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfebb4c9-3b48-4463-8e95-d6b68646ba51" containerName="nova-cell3-conductor-db-sync" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.177271 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfebb4c9-3b48-4463-8e95-d6b68646ba51" containerName="nova-cell3-conductor-db-sync" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.177293 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed21e369-4fc4-44d6-a38a-1740d99705bf" containerName="nova-manage" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.177933 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.180091 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-conductor-config-data" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.202901 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.354254 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.354345 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.354376 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wp7\" (UniqueName: \"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.361884 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.362124 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-log" 
containerID="cri-o://c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" gracePeriod=30 Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.362559 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-api" containerID="cri-o://d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" gracePeriod=30 Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.388203 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.388386 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="72c151ec-f2c9-4521-9b3f-3082653b874e" containerName="nova-scheduler-scheduler" containerID="cri-o://eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3" gracePeriod=30 Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.423411 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.423886 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-log" containerID="cri-o://eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" gracePeriod=30 Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.423962 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-metadata" containerID="cri-o://9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" gracePeriod=30 Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.468776 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.468838 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54wp7\" (UniqueName: \"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.468963 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.473545 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.489814 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.496226 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54wp7\" (UniqueName: \"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7\") pod \"nova-cell3-conductor-0\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.505849 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.810344 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.812520 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.872203 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.980788 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts\") pod \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.980906 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs\") pod \"fd743f1a-d12a-426c-9603-86c7c8092a3b\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.980925 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6g6s\" (UniqueName: \"kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s\") pod \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.980963 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data\") pod \"fd743f1a-d12a-426c-9603-86c7c8092a3b\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981024 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle\") pod \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981101 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrd27\" (UniqueName: \"kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27\") pod \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981117 5058 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts\") pod \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981149 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle\") pod \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981182 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data\") pod \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\" (UID: \"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981284 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmsv\" (UniqueName: \"kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv\") pod \"fd743f1a-d12a-426c-9603-86c7c8092a3b\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981350 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle\") pod \"fd743f1a-d12a-426c-9603-86c7c8092a3b\" (UID: \"fd743f1a-d12a-426c-9603-86c7c8092a3b\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.981380 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data\") pod \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\" (UID: \"0fa59b2f-7cff-4d52-826e-eda45ae51a10\") " Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.982933 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.989457 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv" (OuterVolumeSpecName: "kube-api-access-vjmsv") pod "fd743f1a-d12a-426c-9603-86c7c8092a3b" (UID: "fd743f1a-d12a-426c-9603-86c7c8092a3b"). InnerVolumeSpecName "kube-api-access-vjmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.989478 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s" (OuterVolumeSpecName: "kube-api-access-l6g6s") pod "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" (UID: "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7"). InnerVolumeSpecName "kube-api-access-l6g6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.989727 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs" (OuterVolumeSpecName: "logs") pod "fd743f1a-d12a-426c-9603-86c7c8092a3b" (UID: "fd743f1a-d12a-426c-9603-86c7c8092a3b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.992958 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts" (OuterVolumeSpecName: "scripts") pod "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" (UID: "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:00 crc kubenswrapper[5058]: I1014 09:07:00.996015 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27" (OuterVolumeSpecName: "kube-api-access-qrd27") pod "0fa59b2f-7cff-4d52-826e-eda45ae51a10" (UID: "0fa59b2f-7cff-4d52-826e-eda45ae51a10"). InnerVolumeSpecName "kube-api-access-qrd27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.001983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts" (OuterVolumeSpecName: "scripts") pod "0fa59b2f-7cff-4d52-826e-eda45ae51a10" (UID: "0fa59b2f-7cff-4d52-826e-eda45ae51a10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.020713 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data" (OuterVolumeSpecName: "config-data") pod "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" (UID: "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.026462 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data" (OuterVolumeSpecName: "config-data") pod "0fa59b2f-7cff-4d52-826e-eda45ae51a10" (UID: "0fa59b2f-7cff-4d52-826e-eda45ae51a10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.027971 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" (UID: "ab8abb55-36b8-4dfe-8594-22d8bc9a15e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.028158 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd743f1a-d12a-426c-9603-86c7c8092a3b" (UID: "fd743f1a-d12a-426c-9603-86c7c8092a3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.030818 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data" (OuterVolumeSpecName: "config-data") pod "fd743f1a-d12a-426c-9603-86c7c8092a3b" (UID: "fd743f1a-d12a-426c-9603-86c7c8092a3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.039983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fa59b2f-7cff-4d52-826e-eda45ae51a10" (UID: "0fa59b2f-7cff-4d52-826e-eda45ae51a10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.082884 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data\") pod \"c815dd4e-606d-4637-aca7-117d577c6c74\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083061 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs\") pod \"c815dd4e-606d-4637-aca7-117d577c6c74\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083088 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45j8\" (UniqueName: \"kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8\") pod \"c815dd4e-606d-4637-aca7-117d577c6c74\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083156 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle\") pod \"c815dd4e-606d-4637-aca7-117d577c6c74\" (UID: \"c815dd4e-606d-4637-aca7-117d577c6c74\") " Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083579 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs" (OuterVolumeSpecName: "logs") pod "c815dd4e-606d-4637-aca7-117d577c6c74" (UID: "c815dd4e-606d-4637-aca7-117d577c6c74"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083641 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083657 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd743f1a-d12a-426c-9603-86c7c8092a3b-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083668 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6g6s\" (UniqueName: \"kubernetes.io/projected/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-kube-api-access-l6g6s\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083678 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083686 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083694 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrd27\" (UniqueName: \"kubernetes.io/projected/0fa59b2f-7cff-4d52-826e-eda45ae51a10-kube-api-access-qrd27\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083704 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083712 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083721 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083730 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmsv\" (UniqueName: \"kubernetes.io/projected/fd743f1a-d12a-426c-9603-86c7c8092a3b-kube-api-access-vjmsv\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083738 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd743f1a-d12a-426c-9603-86c7c8092a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.083746 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa59b2f-7cff-4d52-826e-eda45ae51a10-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.087759 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8" (OuterVolumeSpecName: "kube-api-access-l45j8") pod 
"c815dd4e-606d-4637-aca7-117d577c6c74" (UID: "c815dd4e-606d-4637-aca7-117d577c6c74"). InnerVolumeSpecName "kube-api-access-l45j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092657 5058 generic.go:334] "Generic (PLEG): container finished" podID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerID="d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" exitCode=0 Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092694 5058 generic.go:334] "Generic (PLEG): container finished" podID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerID="c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" exitCode=143 Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerDied","Data":"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092809 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerDied","Data":"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092824 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd743f1a-d12a-426c-9603-86c7c8092a3b","Type":"ContainerDied","Data":"69ca8dfcd42b35e444c54f39f2889c2d1e2c873b7b2a115f0b2f8fba4a8844ba"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092844 5058 scope.go:117] "RemoveContainer" containerID="d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.092954 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.095528 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" event={"ID":"0fa59b2f-7cff-4d52-826e-eda45ae51a10","Type":"ContainerDied","Data":"b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.095555 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e02ee5f8f8ceea6dee782a67bfc196552ce6402b7076d7ee19be7016e8d7f0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.095564 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-db-sync-7hvm2" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.104617 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-55r6w" event={"ID":"ab8abb55-36b8-4dfe-8594-22d8bc9a15e7","Type":"ContainerDied","Data":"c63ed0f1b04fabc42bdab392b76e92e0e2cbdb9f95766e774f9d9f46abcd9d64"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.104663 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c63ed0f1b04fabc42bdab392b76e92e0e2cbdb9f95766e774f9d9f46abcd9d64" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.104728 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-55r6w" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.140046 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c815dd4e-606d-4637-aca7-117d577c6c74" (UID: "c815dd4e-606d-4637-aca7-117d577c6c74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.147903 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data" (OuterVolumeSpecName: "config-data") pod "c815dd4e-606d-4637-aca7-117d577c6c74" (UID: "c815dd4e-606d-4637-aca7-117d577c6c74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148705 5058 generic.go:334] "Generic (PLEG): container finished" podID="c815dd4e-606d-4637-aca7-117d577c6c74" containerID="9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" exitCode=0 Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148737 5058 generic.go:334] "Generic (PLEG): container finished" podID="c815dd4e-606d-4637-aca7-117d577c6c74" containerID="eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" exitCode=143 Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148759 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerDied","Data":"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148786 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerDied","Data":"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148813 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c815dd4e-606d-4637-aca7-117d577c6c74","Type":"ContainerDied","Data":"727ed923c45f90d5d077bf54afae9301fa8e14894044bec84cf9b206ce0b1f3f"} Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.148921 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.152546 5058 scope.go:117] "RemoveContainer" containerID="c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.184536 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.187192 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c815dd4e-606d-4637-aca7-117d577c6c74-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.187228 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45j8\" (UniqueName: \"kubernetes.io/projected/c815dd4e-606d-4637-aca7-117d577c6c74-kube-api-access-l45j8\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.187240 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.187251 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c815dd4e-606d-4637-aca7-117d577c6c74-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.204662 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205131 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" containerName="nova-cell1-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205153 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" containerName="nova-cell1-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205174 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-log" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205182 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-log" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205215 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa59b2f-7cff-4d52-826e-eda45ae51a10" containerName="nova-cell2-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205223 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa59b2f-7cff-4d52-826e-eda45ae51a10" containerName="nova-cell2-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205239 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-metadata" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205247 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-metadata" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205266 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-log" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205273 5058 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-log" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.205301 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-api" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205308 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-api" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205542 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-metadata" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205561 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" containerName="nova-cell1-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205578 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" containerName="nova-metadata-log" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205594 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-log" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205611 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa59b2f-7cff-4d52-826e-eda45ae51a10" containerName="nova-cell2-conductor-db-sync" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.205627 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" containerName="nova-api-api" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.206373 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.206607 5058 scope.go:117] "RemoveContainer" containerID="d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.207480 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a\": container with ID starting with d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a not found: ID does not exist" containerID="d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.207526 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a"} err="failed to get container status \"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a\": rpc error: code = NotFound desc = could not find container \"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a\": container with ID starting with d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.207547 5058 scope.go:117] "RemoveContainer" containerID="c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.213669 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-conductor-config-data" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.214553 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a\": container with ID starting with c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a not found: ID does not exist" containerID="c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.218436 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a"} err="failed to get container status \"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a\": rpc error: code = NotFound desc = could not find container \"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a\": container with ID starting with c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.218473 5058 scope.go:117] "RemoveContainer" containerID="d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.229447 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.235979 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a"} err="failed to get container status \"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a\": rpc error: code = NotFound desc = could not find container \"d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a\": container with ID starting with 
d51385bb075d69fde9e1cb101cf2f3e76337dd8bcee9c184736a905a2f4d694a not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.236019 5058 scope.go:117] "RemoveContainer" containerID="c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.237250 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a"} err="failed to get container status \"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a\": rpc error: code = NotFound desc = could not find container \"c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a\": container with ID starting with c52ef02a85d038ced94f448c21637451cee34bbfd516731924af292bc7273a5a not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.237274 5058 scope.go:117] "RemoveContainer" containerID="9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.246182 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.253363 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.259093 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.260852 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.264096 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.269153 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.277311 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.286672 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.289260 5058 scope.go:117] "RemoveContainer" containerID="eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.294684 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.296403 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.298213 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.314624 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.317396 5058 scope.go:117] "RemoveContainer" containerID="9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.317720 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b\": container with ID starting with 9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b not found: ID does not exist" containerID="9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.317749 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b"} err="failed to get container status \"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b\": rpc error: code = NotFound desc = could not find container \"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b\": container with ID starting with 9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.317768 5058 scope.go:117] "RemoveContainer" containerID="eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" Oct 14 09:07:01 crc kubenswrapper[5058]: E1014 09:07:01.318163 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12\": container with ID starting with eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12 not found: ID does not exist" containerID="eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.318180 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12"} err="failed to get container status \"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12\": rpc error: code = NotFound desc = could not find container \"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12\": container with ID starting with eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12 not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.318193 5058 scope.go:117] "RemoveContainer" containerID="9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.318431 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b"} err="failed to get container status \"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b\": rpc error: code = NotFound desc = could not find container \"9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b\": container with ID starting with 
9d1745e1b76ecd67291e4a103fc1fc172666d9045c90c366cc0f8cd5bf864e8b not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.318446 5058 scope.go:117] "RemoveContainer" containerID="eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.318744 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12"} err="failed to get container status \"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12\": rpc error: code = NotFound desc = could not find container \"eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12\": container with ID starting with eba2ff832d10feee326eec88bcf5e3f99556f0ae56abe250ef9da7fe48dfdf12 not found: ID does not exist" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391264 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391349 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlg7\" (UniqueName: \"kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391380 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391464 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slv8v\" (UniqueName: \"kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391519 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391607 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391680 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5lj\" (UniqueName: \"kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0"
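The "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above (for d51385bb..., c52ef02a..., 9d1745e1..., eba2ff83...) are a benign race: by the time kubelet re-queried CRI-O, the old nova-api/nova-metadata containers were already gone, so removal behaves idempotently and the error is logged and skipped. A minimal stdlib-Go sketch of that pattern (fakeRuntime, errNotFound and removeIfPresent are invented names for illustration, not kubelet source; real kubelet checks the gRPC NotFound code from the CRI runtime):

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the CRI "rpc error: code = NotFound" seen above.
    var errNotFound = errors.New("container not found")

    // fakeRuntime is a toy stand-in for the container runtime service.
    type fakeRuntime struct{ containers map[string]bool }

    func (r *fakeRuntime) RemoveContainer(id string) error {
        if !r.containers[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(r.containers, id)
        return nil
    }

    // removeIfPresent treats NotFound as success: another cleanup path
    // already removed the container, so there is nothing left to do.
    func removeIfPresent(r *fakeRuntime, id string) {
        if err := r.RemoveContainer(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Println("already gone (benign):", err)
                return
            }
            fmt.Println("real failure:", err)
            return
        }
        fmt.Println("removed:", id)
    }

    func main() {
        r := &fakeRuntime{containers: map[string]bool{"d51385bb075d": true}}
        removeIfPresent(r, "d51385bb075d") // first call removes it
        removeIfPresent(r, "d51385bb075d") // second call: NotFound, logged and ignored
    }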
Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391793 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.391987 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.392050 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.392258 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.462315 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.462823 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.472508 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494230 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494308 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5lj\" (UniqueName: \"kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494376 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494397 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: 
I1014 09:07:01.494425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494486 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494552 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494575 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zlg7\" (UniqueName: \"kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494595 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494617 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slv8v\" (UniqueName: \"kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.494639 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.496394 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.499983 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.501138 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.502874 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.503504 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.504245 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.506332 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.507598 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.521001 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.526097 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5lj\" (UniqueName: \"kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj\") pod \"nova-cell2-conductor-0\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.526201 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlg7\" (UniqueName: \"kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7\") pod \"nova-metadata-0\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.531455 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slv8v\" (UniqueName: \"kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v\") pod \"nova-api-0\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " pod="openstack/nova-api-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.563718 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.584081 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
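Each volume above moves through the same three phases, in order: "VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). A toy Go sketch of that ordering (mount and reconcile are invented names for illustration; the real kubelet drives this from its desired-state-of-world reconciler and calls the secret/projected/empty-dir volume plugins for SetUp):

    package main

    import "fmt"

    // mount models one volume the reconciler wants mounted for a pod.
    type mount struct{ volume, pod string }

    // reconcile mirrors the three phases in the entries above: verify the
    // volume is attached, start the mount operation, then report success.
    func reconcile(desired []mount) {
        for _, m := range desired {
            fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", m.volume, m.pod)
        }
        for _, m := range desired {
            fmt.Printf("MountVolume started for volume %q pod %q\n", m.volume, m.pod)
            // A real kubelet would invoke the volume plugin's SetUp here
            // and only log success once the mount actually completed.
            fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", m.volume, m.pod)
        }
    }

    func main() {
        reconcile([]mount{
            {"logs", "nova-api-0"},
            {"config-data", "nova-api-0"},
            {"combined-ca-bundle", "nova-api-0"},
        })
    }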
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.892337 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.892631 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.901589 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.902750 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.913608 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.916212 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.923873 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.946011 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.952085 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.953895 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.955142 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:07:01 crc kubenswrapper[5058]: I1014 09:07:01.991926 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.013637 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.013908 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.014065 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm249\" (UniqueName: \"kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.028430 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 
Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.028869 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="dnsmasq-dns" containerID="cri-o://539db843c3a72cd66c250fb1d6653beaa3a70150a825c024b66a71d2af7b3208" gracePeriod=10 Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.117067 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data\") pod \"72c151ec-f2c9-4521-9b3f-3082653b874e\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.117203 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle\") pod \"72c151ec-f2c9-4521-9b3f-3082653b874e\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.117350 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jpj\" (UniqueName: \"kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj\") pod \"72c151ec-f2c9-4521-9b3f-3082653b874e\" (UID: \"72c151ec-f2c9-4521-9b3f-3082653b874e\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.117763 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.117849 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.118845 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm249\" (UniqueName: \"kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.123355 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.123422 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj" (OuterVolumeSpecName: "kube-api-access-v7jpj") pod "72c151ec-f2c9-4521-9b3f-3082653b874e" (UID: "72c151ec-f2c9-4521-9b3f-3082653b874e"). InnerVolumeSpecName "kube-api-access-v7jpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.125721 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.135908 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm249\" (UniqueName: \"kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249\") pod \"nova-cell1-conductor-0\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.150366 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72c151ec-f2c9-4521-9b3f-3082653b874e" (UID: "72c151ec-f2c9-4521-9b3f-3082653b874e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.181818 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data" (OuterVolumeSpecName: "config-data") pod "72c151ec-f2c9-4521-9b3f-3082653b874e" (UID: "72c151ec-f2c9-4521-9b3f-3082653b874e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.182991 5058 generic.go:334] "Generic (PLEG): container finished" podID="e5110179-f50d-468f-a925-adb04c2d62db" containerID="539db843c3a72cd66c250fb1d6653beaa3a70150a825c024b66a71d2af7b3208" exitCode=0 Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.183182 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" event={"ID":"e5110179-f50d-468f-a925-adb04c2d62db","Type":"ContainerDied","Data":"539db843c3a72cd66c250fb1d6653beaa3a70150a825c024b66a71d2af7b3208"} Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.191772 5058 generic.go:334] "Generic (PLEG): container finished" podID="72c151ec-f2c9-4521-9b3f-3082653b874e" containerID="eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3" exitCode=0 Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.191859 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72c151ec-f2c9-4521-9b3f-3082653b874e","Type":"ContainerDied","Data":"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3"} Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.191891 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72c151ec-f2c9-4521-9b3f-3082653b874e","Type":"ContainerDied","Data":"53f8908a3cb7bee929e2c636f7c850078298ab6b4aba651cb847998f3ca30bd6"} Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.191911 5058 scope.go:117] "RemoveContainer" containerID="eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.192107 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: W1014 09:07:02.193507 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a70199b_22be_47f0_b8b1_75b894258c1a.slice/crio-7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8 WatchSource:0}: Error finding container 7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8: Status 404 returned error can't find the container with id 7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8 Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.195690 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.205478 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"82d7fde3-d8a9-40c9-a83c-4003d297615e","Type":"ContainerStarted","Data":"b0260427a1bb46b0a0e2780827b516abeb2abb961f9f6a656ac31cb58c58d269"} Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.205525 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"82d7fde3-d8a9-40c9-a83c-4003d297615e","Type":"ContainerStarted","Data":"c366e03ef632cec7cfc071c6caa67ffc06b60df4c35a33291a98817d0bdd7c4a"} Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.207617 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.210114 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.217507 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell3-novncproxy-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.219103 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell2-novncproxy-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.221123 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.221150 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c151ec-f2c9-4521-9b3f-3082653b874e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.221164 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jpj\" (UniqueName: \"kubernetes.io/projected/72c151ec-f2c9-4521-9b3f-3082653b874e-kube-api-access-v7jpj\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.227596 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell3-conductor-0" podStartSLOduration=2.22757366 podStartE2EDuration="2.22757366s" podCreationTimestamp="2025-10-14 09:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:02.221552718 +0000 UTC m=+8370.132636534" watchObservedRunningTime="2025-10-14 09:07:02.22757366 +0000 UTC m=+8370.138657466" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.228066 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.244529 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.277183 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.282297 5058 scope.go:117] "RemoveContainer" containerID="eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3" Oct 14 09:07:02 crc kubenswrapper[5058]: E1014 09:07:02.290567 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3\": container with ID starting with eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3 not found: ID does not exist" containerID="eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.290623 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3"} err="failed to get container status \"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3\": rpc error: code = NotFound desc = could not find container \"eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3\": container with ID starting with eba4b182c4387f8b55a35ab843088048ed587d7ad9b02ff88cfc4dabd7c3e3d3 not found: ID does not exist" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.295824 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.333435 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: E1014 09:07:02.333910 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c151ec-f2c9-4521-9b3f-3082653b874e" containerName="nova-scheduler-scheduler" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.333923 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c151ec-f2c9-4521-9b3f-3082653b874e" containerName="nova-scheduler-scheduler" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.334120 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c151ec-f2c9-4521-9b3f-3082653b874e" containerName="nova-scheduler-scheduler" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.334817 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.337727 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.346693 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.400017 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.425508 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.433572 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.433988 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.536928 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.537008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.537077 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.546507 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.546537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: 
I1014 09:07:02.557669 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb\") pod \"nova-scheduler-0\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.664303 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.732932 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.809678 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c151ec-f2c9-4521-9b3f-3082653b874e" path="/var/lib/kubelet/pods/72c151ec-f2c9-4521-9b3f-3082653b874e/volumes" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.812603 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c815dd4e-606d-4637-aca7-117d577c6c74" path="/var/lib/kubelet/pods/c815dd4e-606d-4637-aca7-117d577c6c74/volumes" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.813847 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd743f1a-d12a-426c-9603-86c7c8092a3b" path="/var/lib/kubelet/pods/fd743f1a-d12a-426c-9603-86c7c8092a3b/volumes" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.842016 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") pod \"e5110179-f50d-468f-a925-adb04c2d62db\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.842145 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config\") pod \"e5110179-f50d-468f-a925-adb04c2d62db\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.842195 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb\") pod \"e5110179-f50d-468f-a925-adb04c2d62db\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.842256 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb\") pod \"e5110179-f50d-468f-a925-adb04c2d62db\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.842304 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc\") pod \"e5110179-f50d-468f-a925-adb04c2d62db\" (UID: \"e5110179-f50d-468f-a925-adb04c2d62db\") " Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.884456 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.889278 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl" (OuterVolumeSpecName: "kube-api-access-l78xl") pod "e5110179-f50d-468f-a925-adb04c2d62db" (UID: "e5110179-f50d-468f-a925-adb04c2d62db"). InnerVolumeSpecName "kube-api-access-l78xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:02 crc kubenswrapper[5058]: I1014 09:07:02.944497 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l78xl\" (UniqueName: \"kubernetes.io/projected/e5110179-f50d-468f-a925-adb04c2d62db-kube-api-access-l78xl\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.034430 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5110179-f50d-468f-a925-adb04c2d62db" (UID: "e5110179-f50d-468f-a925-adb04c2d62db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.046835 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.062461 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config" (OuterVolumeSpecName: "config") pod "e5110179-f50d-468f-a925-adb04c2d62db" (UID: "e5110179-f50d-468f-a925-adb04c2d62db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.078476 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5110179-f50d-468f-a925-adb04c2d62db" (UID: "e5110179-f50d-468f-a925-adb04c2d62db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.101545 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5110179-f50d-468f-a925-adb04c2d62db" (UID: "e5110179-f50d-468f-a925-adb04c2d62db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.148904 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.148941 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.148954 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5110179-f50d-468f-a925-adb04c2d62db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.181590 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:03 crc kubenswrapper[5058]: W1014 09:07:03.190736 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod970c4584_8ef4_44d1_8fd3_ff415a3987a1.slice/crio-e4ba41021f6f1a7b490e9560389dca32c33a4c222fdf84bedaa152b25a28ff8a WatchSource:0}: Error finding container e4ba41021f6f1a7b490e9560389dca32c33a4c222fdf84bedaa152b25a28ff8a: Status 404 returned error can't find the container with id e4ba41021f6f1a7b490e9560389dca32c33a4c222fdf84bedaa152b25a28ff8a Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.221421 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerStarted","Data":"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.221463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerStarted","Data":"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.221473 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerStarted","Data":"1035da48744259d66c3555bfe52bc3f413d84ef3df919e86bca6d41e79780167"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.224967 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"3a70199b-22be-47f0-b8b1-75b894258c1a","Type":"ContainerStarted","Data":"38bd7e9f5f204dca7ed11cec45a145977118f5eb5309800e690b1916e8bdf20b"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.225009 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"3a70199b-22be-47f0-b8b1-75b894258c1a","Type":"ContainerStarted","Data":"7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.225769 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.234699 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"970c4584-8ef4-44d1-8fd3-ff415a3987a1","Type":"ContainerStarted","Data":"e4ba41021f6f1a7b490e9560389dca32c33a4c222fdf84bedaa152b25a28ff8a"} Oct 14 09:07:03 crc 
kubenswrapper[5058]: I1014 09:07:03.236843 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.236831927 podStartE2EDuration="2.236831927s" podCreationTimestamp="2025-10-14 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:03.236365803 +0000 UTC m=+8371.147449609" watchObservedRunningTime="2025-10-14 09:07:03.236831927 +0000 UTC m=+8371.147915743" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.238777 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"68be61db-e0a0-4286-80c3-b0ffeb8498d4","Type":"ContainerStarted","Data":"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.238833 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"68be61db-e0a0-4286-80c3-b0ffeb8498d4","Type":"ContainerStarted","Data":"71ffd59e1279f5ef9e08ecff962cb6e4f7684ba85269d6f3ced8ef0b47d66780"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.239636 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.252139 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" event={"ID":"e5110179-f50d-468f-a925-adb04c2d62db","Type":"ContainerDied","Data":"6659313c254ab97265b31739a7d432354ca3e646e97c1f9ba3b666e2190dc9fb"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.252205 5058 scope.go:117] "RemoveContainer" containerID="539db843c3a72cd66c250fb1d6653beaa3a70150a825c024b66a71d2af7b3208" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.252340 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457c8cd99-jsz2c" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.255234 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-conductor-0" podStartSLOduration=2.2552148020000002 podStartE2EDuration="2.255214802s" podCreationTimestamp="2025-10-14 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:03.25023199 +0000 UTC m=+8371.161315796" watchObservedRunningTime="2025-10-14 09:07:03.255214802 +0000 UTC m=+8371.166298608" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.263384 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerStarted","Data":"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.263419 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerStarted","Data":"60452b04bd6ebc3a2cdde9b407557bd83da0b188371419308e6a9b769e0c7cd6"} Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.273905 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.273882746 podStartE2EDuration="2.273882746s" podCreationTimestamp="2025-10-14 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:03.268808651 +0000 UTC m=+8371.179892487" watchObservedRunningTime="2025-10-14 09:07:03.273882746 +0000 UTC m=+8371.184966552" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.282003 5058 scope.go:117] "RemoveContainer" containerID="ecb9dc79b19471c03ca5914d222875a6991c8225e341d415711ca03b171607d5" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.307180 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.307157818 podStartE2EDuration="2.307157818s" podCreationTimestamp="2025-10-14 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:03.296820362 +0000 UTC m=+8371.207904168" watchObservedRunningTime="2025-10-14 09:07:03.307157818 +0000 UTC m=+8371.218241624" Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.344533 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"] Oct 14 09:07:03 crc kubenswrapper[5058]: I1014 09:07:03.371714 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457c8cd99-jsz2c"] Oct 14 09:07:04 crc kubenswrapper[5058]: I1014 09:07:04.287213 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"970c4584-8ef4-44d1-8fd3-ff415a3987a1","Type":"ContainerStarted","Data":"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65"} Oct 14 09:07:04 crc kubenswrapper[5058]: I1014 09:07:04.297229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerStarted","Data":"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5"} Oct 14 09:07:04 crc kubenswrapper[5058]: I1014 09:07:04.308340 5058 
Oct 14 09:07:04 crc kubenswrapper[5058]: I1014 09:07:04.308340 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.308315611 podStartE2EDuration="2.308315611s" podCreationTimestamp="2025-10-14 09:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:04.301818126 +0000 UTC m=+8372.212901932" watchObservedRunningTime="2025-10-14 09:07:04.308315611 +0000 UTC m=+8372.219399417" Oct 14 09:07:04 crc kubenswrapper[5058]: I1014 09:07:04.809640 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5110179-f50d-468f-a925-adb04c2d62db" path="/var/lib/kubelet/pods/e5110179-f50d-468f-a925-adb04c2d62db/volumes" Oct 14 09:07:05 crc kubenswrapper[5058]: I1014 09:07:05.055750 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fbsbw"] Oct 14 09:07:05 crc kubenswrapper[5058]: I1014 09:07:05.066507 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fbsbw"] Oct 14 09:07:06 crc kubenswrapper[5058]: I1014 09:07:06.631505 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:07:06 crc kubenswrapper[5058]: I1014 09:07:06.631899 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:07:06 crc kubenswrapper[5058]: I1014 09:07:06.808354 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b791336a-d0a2-4a32-823d-6eb0dbc56438" path="/var/lib/kubelet/pods/b791336a-d0a2-4a32-823d-6eb0dbc56438/volumes" Oct 14 09:07:07 crc kubenswrapper[5058]: I1014 09:07:07.667501 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 09:07:10 crc kubenswrapper[5058]: I1014 09:07:10.540743 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell3-conductor-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.315585 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-cell-mapping-5cg8j"] Oct 14 09:07:11 crc kubenswrapper[5058]: E1014 09:07:11.316605 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="dnsmasq-dns" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.316635 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="dnsmasq-dns" Oct 14 09:07:11 crc kubenswrapper[5058]: E1014 09:07:11.316658 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="init" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.316670 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="init" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.317043 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5110179-f50d-468f-a925-adb04c2d62db" containerName="dnsmasq-dns" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.318164 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.320345 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-manage-scripts" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.322689 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-manage-config-data" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.328950 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-cell-mapping-5cg8j"] Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.449044 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.449109 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.449151 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.449319 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsgj\" (UniqueName: \"kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.551532 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.551605 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.551653 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.551745 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsgj\" (UniqueName: 
\"kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.570684 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.572985 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcsgj\" (UniqueName: \"kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.573090 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.585155 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.585233 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.590548 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts\") pod \"nova-cell3-cell-mapping-5cg8j\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.609384 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell2-conductor-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.631099 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.631399 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:07:11 crc kubenswrapper[5058]: I1014 09:07:11.669416 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.175135 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-cell-mapping-5cg8j"] Oct 14 09:07:12 crc kubenswrapper[5058]: W1014 09:07:12.187681 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5b7921_0fe3_4434_b195_269571dcb814.slice/crio-bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256 WatchSource:0}: Error finding container bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256: Status 404 returned error can't find the container with id bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256 Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.286150 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.391074 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-cell-mapping-5cg8j" event={"ID":"db5b7921-0fe3-4434-b195-269571dcb814","Type":"ContainerStarted","Data":"7ebbc5ba8183ae77a7a1f2b7b4f9743ca1334d9057a56b761032a5ee5fb856eb"} Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.391114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-cell-mapping-5cg8j" event={"ID":"db5b7921-0fe3-4434-b195-269571dcb814","Type":"ContainerStarted","Data":"bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256"} Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.579034 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell3-cell-mapping-5cg8j" podStartSLOduration=1.5790174829999999 podStartE2EDuration="1.579017483s" podCreationTimestamp="2025-10-14 09:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:12.412807141 +0000 UTC m=+8380.323890947" watchObservedRunningTime="2025-10-14 09:07:12.579017483 +0000 UTC m=+8380.490101289" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.579536 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-cell-mapping-prkt6"] Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.580717 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.583211 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-manage-config-data" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.583318 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-manage-scripts" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.592584 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-cell-mapping-prkt6"] Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.665340 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.667047 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.667077 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.683871 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgvd\" (UniqueName: \"kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.683914 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.684133 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.684197 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.696988 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.748946 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"http://10.217.1.129:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.748968 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.129:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.786104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvgvd\" (UniqueName: \"kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.786148 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.786216 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.786236 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.790309 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.790347 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.790642 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.804067 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvgvd\" (UniqueName: \"kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd\") pod \"nova-cell2-cell-mapping-prkt6\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " 
pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:12 crc kubenswrapper[5058]: I1014 09:07:12.944266 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.431179 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.500822 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-cell-mapping-prkt6"] Oct 14 09:07:13 crc kubenswrapper[5058]: W1014 09:07:13.513615 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda624d348_510d_461d_b705_f8eb754d02ba.slice/crio-f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5 WatchSource:0}: Error finding container f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5: Status 404 returned error can't find the container with id f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5 Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.584133 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8zv"] Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.585821 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.588018 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.588230 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.597885 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8zv"] Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.704625 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pjf\" (UniqueName: \"kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.704668 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.704761 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.704811 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " 
pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.806246 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pjf\" (UniqueName: \"kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.806292 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.806384 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.806425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.810862 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.810984 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.812382 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:13 crc kubenswrapper[5058]: I1014 09:07:13.838588 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pjf\" (UniqueName: \"kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf\") pod \"nova-cell1-cell-mapping-qn8zv\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:14 crc kubenswrapper[5058]: I1014 09:07:14.000335 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:14 crc kubenswrapper[5058]: I1014 09:07:14.408992 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-cell-mapping-prkt6" event={"ID":"a624d348-510d-461d-b705-f8eb754d02ba","Type":"ContainerStarted","Data":"57666bdc8c09cae7f4312dbe39fe4c843e95425ec81f67fa14dc2ad18296c122"} Oct 14 09:07:14 crc kubenswrapper[5058]: I1014 09:07:14.409035 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-cell-mapping-prkt6" event={"ID":"a624d348-510d-461d-b705-f8eb754d02ba","Type":"ContainerStarted","Data":"f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5"} Oct 14 09:07:14 crc kubenswrapper[5058]: I1014 09:07:14.428853 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-cell-mapping-prkt6" podStartSLOduration=2.428834461 podStartE2EDuration="2.428834461s" podCreationTimestamp="2025-10-14 09:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:14.426579027 +0000 UTC m=+8382.337662843" watchObservedRunningTime="2025-10-14 09:07:14.428834461 +0000 UTC m=+8382.339918257" Oct 14 09:07:14 crc kubenswrapper[5058]: I1014 09:07:14.491169 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8zv"] Oct 14 09:07:14 crc kubenswrapper[5058]: W1014 09:07:14.502037 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9269bf36_7053_4374_8049_ead5e973b407.slice/crio-a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3 WatchSource:0}: Error finding container a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3: Status 404 returned error can't find the container with id a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3 Oct 14 09:07:15 crc kubenswrapper[5058]: I1014 09:07:15.044483 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a080-account-create-6djhc"] Oct 14 09:07:15 crc kubenswrapper[5058]: I1014 09:07:15.051625 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a080-account-create-6djhc"] Oct 14 09:07:15 crc kubenswrapper[5058]: I1014 09:07:15.422033 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8zv" event={"ID":"9269bf36-7053-4374-8049-ead5e973b407","Type":"ContainerStarted","Data":"32346549baf6e71472566ac65c6453ed6ed142ddcdb6402a2a7a453da6ae9546"} Oct 14 09:07:15 crc kubenswrapper[5058]: I1014 09:07:15.422093 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8zv" event={"ID":"9269bf36-7053-4374-8049-ead5e973b407","Type":"ContainerStarted","Data":"a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3"} Oct 14 09:07:15 crc kubenswrapper[5058]: I1014 09:07:15.451405 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qn8zv" podStartSLOduration=2.451389118 podStartE2EDuration="2.451389118s" podCreationTimestamp="2025-10-14 09:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:15.445032296 +0000 UTC m=+8383.356116092" watchObservedRunningTime="2025-10-14 09:07:15.451389118 +0000 UTC m=+8383.362472924" Oct 14 09:07:16 crc 
kubenswrapper[5058]: I1014 09:07:16.805293 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c7f99b-2aeb-44b2-aa8c-25e171bce768" path="/var/lib/kubelet/pods/f6c7f99b-2aeb-44b2-aa8c-25e171bce768/volumes" Oct 14 09:07:18 crc kubenswrapper[5058]: I1014 09:07:18.479539 5058 generic.go:334] "Generic (PLEG): container finished" podID="db5b7921-0fe3-4434-b195-269571dcb814" containerID="7ebbc5ba8183ae77a7a1f2b7b4f9743ca1334d9057a56b761032a5ee5fb856eb" exitCode=0 Oct 14 09:07:18 crc kubenswrapper[5058]: I1014 09:07:18.479660 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-cell-mapping-5cg8j" event={"ID":"db5b7921-0fe3-4434-b195-269571dcb814","Type":"ContainerDied","Data":"7ebbc5ba8183ae77a7a1f2b7b4f9743ca1334d9057a56b761032a5ee5fb856eb"} Oct 14 09:07:19 crc kubenswrapper[5058]: I1014 09:07:19.498617 5058 generic.go:334] "Generic (PLEG): container finished" podID="a624d348-510d-461d-b705-f8eb754d02ba" containerID="57666bdc8c09cae7f4312dbe39fe4c843e95425ec81f67fa14dc2ad18296c122" exitCode=0 Oct 14 09:07:19 crc kubenswrapper[5058]: I1014 09:07:19.498703 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-cell-mapping-prkt6" event={"ID":"a624d348-510d-461d-b705-f8eb754d02ba","Type":"ContainerDied","Data":"57666bdc8c09cae7f4312dbe39fe4c843e95425ec81f67fa14dc2ad18296c122"} Oct 14 09:07:19 crc kubenswrapper[5058]: I1014 09:07:19.502619 5058 generic.go:334] "Generic (PLEG): container finished" podID="9269bf36-7053-4374-8049-ead5e973b407" containerID="32346549baf6e71472566ac65c6453ed6ed142ddcdb6402a2a7a453da6ae9546" exitCode=0 Oct 14 09:07:19 crc kubenswrapper[5058]: I1014 09:07:19.503063 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8zv" event={"ID":"9269bf36-7053-4374-8049-ead5e973b407","Type":"ContainerDied","Data":"32346549baf6e71472566ac65c6453ed6ed142ddcdb6402a2a7a453da6ae9546"} Oct 14 09:07:19 crc kubenswrapper[5058]: I1014 09:07:19.899529 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.050636 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcsgj\" (UniqueName: \"kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj\") pod \"db5b7921-0fe3-4434-b195-269571dcb814\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.050990 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data\") pod \"db5b7921-0fe3-4434-b195-269571dcb814\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.051068 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts\") pod \"db5b7921-0fe3-4434-b195-269571dcb814\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.051127 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle\") pod \"db5b7921-0fe3-4434-b195-269571dcb814\" (UID: \"db5b7921-0fe3-4434-b195-269571dcb814\") " Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.058307 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj" (OuterVolumeSpecName: "kube-api-access-gcsgj") pod "db5b7921-0fe3-4434-b195-269571dcb814" (UID: "db5b7921-0fe3-4434-b195-269571dcb814"). InnerVolumeSpecName "kube-api-access-gcsgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.061083 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts" (OuterVolumeSpecName: "scripts") pod "db5b7921-0fe3-4434-b195-269571dcb814" (UID: "db5b7921-0fe3-4434-b195-269571dcb814"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.088529 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db5b7921-0fe3-4434-b195-269571dcb814" (UID: "db5b7921-0fe3-4434-b195-269571dcb814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.097999 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data" (OuterVolumeSpecName: "config-data") pod "db5b7921-0fe3-4434-b195-269571dcb814" (UID: "db5b7921-0fe3-4434-b195-269571dcb814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.154507 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.154633 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.154657 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5b7921-0fe3-4434-b195-269571dcb814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.154687 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcsgj\" (UniqueName: \"kubernetes.io/projected/db5b7921-0fe3-4434-b195-269571dcb814-kube-api-access-gcsgj\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.520758 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-cell-mapping-5cg8j" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.522409 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-cell-mapping-5cg8j" event={"ID":"db5b7921-0fe3-4434-b195-269571dcb814","Type":"ContainerDied","Data":"bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256"} Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.522465 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bea99a7fd571c19207fc7cd253298229b9c8a0e60ff5a9a6e5f12b1fd378e256" Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.698881 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.699227 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-api" containerID="cri-o://eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a" gracePeriod=30 Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.699371 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-log" containerID="cri-o://c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18" gracePeriod=30 Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.710187 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.710401 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerName="nova-scheduler-scheduler" containerID="cri-o://2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" gracePeriod=30 Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.737820 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.738038 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af479b63-5f6c-43b3-9597-c897502d4457" 
containerName="nova-metadata-log" containerID="cri-o://cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6" gracePeriod=30 Oct 14 09:07:20 crc kubenswrapper[5058]: I1014 09:07:20.738212 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-metadata" containerID="cri-o://d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5" gracePeriod=30 Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.004867 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.010926 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.180865 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pjf\" (UniqueName: \"kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf\") pod \"9269bf36-7053-4374-8049-ead5e973b407\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.180973 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle\") pod \"9269bf36-7053-4374-8049-ead5e973b407\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.181008 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle\") pod \"a624d348-510d-461d-b705-f8eb754d02ba\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.181056 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvgvd\" (UniqueName: \"kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd\") pod \"a624d348-510d-461d-b705-f8eb754d02ba\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.181098 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data\") pod \"9269bf36-7053-4374-8049-ead5e973b407\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.182057 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts\") pod \"9269bf36-7053-4374-8049-ead5e973b407\" (UID: \"9269bf36-7053-4374-8049-ead5e973b407\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.182154 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data\") pod \"a624d348-510d-461d-b705-f8eb754d02ba\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.182179 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts\") pod \"a624d348-510d-461d-b705-f8eb754d02ba\" (UID: \"a624d348-510d-461d-b705-f8eb754d02ba\") " Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.185884 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd" (OuterVolumeSpecName: "kube-api-access-bvgvd") pod "a624d348-510d-461d-b705-f8eb754d02ba" (UID: "a624d348-510d-461d-b705-f8eb754d02ba"). InnerVolumeSpecName "kube-api-access-bvgvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.186236 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts" (OuterVolumeSpecName: "scripts") pod "a624d348-510d-461d-b705-f8eb754d02ba" (UID: "a624d348-510d-461d-b705-f8eb754d02ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.187215 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts" (OuterVolumeSpecName: "scripts") pod "9269bf36-7053-4374-8049-ead5e973b407" (UID: "9269bf36-7053-4374-8049-ead5e973b407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.202684 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf" (OuterVolumeSpecName: "kube-api-access-h4pjf") pod "9269bf36-7053-4374-8049-ead5e973b407" (UID: "9269bf36-7053-4374-8049-ead5e973b407"). InnerVolumeSpecName "kube-api-access-h4pjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.209107 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9269bf36-7053-4374-8049-ead5e973b407" (UID: "9269bf36-7053-4374-8049-ead5e973b407"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.219608 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a624d348-510d-461d-b705-f8eb754d02ba" (UID: "a624d348-510d-461d-b705-f8eb754d02ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.222426 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data" (OuterVolumeSpecName: "config-data") pod "a624d348-510d-461d-b705-f8eb754d02ba" (UID: "a624d348-510d-461d-b705-f8eb754d02ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.227427 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data" (OuterVolumeSpecName: "config-data") pod "9269bf36-7053-4374-8049-ead5e973b407" (UID: "9269bf36-7053-4374-8049-ead5e973b407"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284438 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284472 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284482 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pjf\" (UniqueName: \"kubernetes.io/projected/9269bf36-7053-4374-8049-ead5e973b407-kube-api-access-h4pjf\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284492 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284500 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a624d348-510d-461d-b705-f8eb754d02ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284508 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvgvd\" (UniqueName: \"kubernetes.io/projected/a624d348-510d-461d-b705-f8eb754d02ba-kube-api-access-bvgvd\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284564 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.284575 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9269bf36-7053-4374-8049-ead5e973b407-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.551301 5058 generic.go:334] "Generic (PLEG): container finished" podID="af479b63-5f6c-43b3-9597-c897502d4457" containerID="cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6" exitCode=143 Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.551408 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerDied","Data":"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6"} Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.553534 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-cell-mapping-prkt6" event={"ID":"a624d348-510d-461d-b705-f8eb754d02ba","Type":"ContainerDied","Data":"f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5"} Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.553578 5058 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d0a94dadb45a4871bca7d7d184393a375e76feda054bcf0ea0cc58158a15d5" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.553659 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-cell-mapping-prkt6" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.572222 5058 generic.go:334] "Generic (PLEG): container finished" podID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerID="c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18" exitCode=143 Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.572284 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerDied","Data":"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18"} Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.575070 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qn8zv" event={"ID":"9269bf36-7053-4374-8049-ead5e973b407","Type":"ContainerDied","Data":"a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3"} Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.575094 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6cc02a40d7d3d2f2849486cb6a19d4d27b809291bb92ef770a17b79007440d3" Oct 14 09:07:21 crc kubenswrapper[5058]: I1014 09:07:21.575175 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qn8zv" Oct 14 09:07:22 crc kubenswrapper[5058]: E1014 09:07:22.666922 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 09:07:22 crc kubenswrapper[5058]: E1014 09:07:22.670335 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 09:07:22 crc kubenswrapper[5058]: E1014 09:07:22.672905 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 09:07:22 crc kubenswrapper[5058]: E1014 09:07:22.673034 5058 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerName="nova-scheduler-scheduler" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.490188 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.495828 5058 util.go:48] "No ready sandbox for pod can be found. 
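
Annotation: the ExecSync errors above come from nova-scheduler's readiness probe, which execs /usr/bin/pgrep -r DRST nova-scheduler inside the container; once the container is stopping, CRI-O refuses to register new exec sessions ("cannot register an exec PID"), so the probe errors out instead of returning a clean unready result. A sketch of the probe's success criterion (exit code 0 means ready), run on the host purely for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The probe command from the log, run on the host as a sketch; kubelet
	// really asks the runtime to exec it inside the container.
	cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler")
	if err := cmd.Run(); err != nil {
		// Non-zero exit means "not ready"; a transport-level ExecSync error
		// (like the one above) surfaces as a probe error rather than a
		// clean failure.
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("ready")
}
```
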
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.601863 5058 generic.go:334] "Generic (PLEG): container finished" podID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerID="eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a" exitCode=0 Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.601932 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerDied","Data":"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a"} Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.602176 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0f08db2-363f-4b5f-be21-65f911ae2fba","Type":"ContainerDied","Data":"1035da48744259d66c3555bfe52bc3f413d84ef3df919e86bca6d41e79780167"} Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.602201 5058 scope.go:117] "RemoveContainer" containerID="eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.601951 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.604739 5058 generic.go:334] "Generic (PLEG): container finished" podID="af479b63-5f6c-43b3-9597-c897502d4457" containerID="d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5" exitCode=0 Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.604769 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerDied","Data":"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5"} Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.604830 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.604791 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af479b63-5f6c-43b3-9597-c897502d4457","Type":"ContainerDied","Data":"60452b04bd6ebc3a2cdde9b407557bd83da0b188371419308e6a9b769e0c7cd6"} Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.643575 5058 scope.go:117] "RemoveContainer" containerID="c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649516 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs\") pod \"af479b63-5f6c-43b3-9597-c897502d4457\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649605 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data\") pod \"af479b63-5f6c-43b3-9597-c897502d4457\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649647 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data\") pod \"b0f08db2-363f-4b5f-be21-65f911ae2fba\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649718 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zlg7\" (UniqueName: \"kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7\") pod \"af479b63-5f6c-43b3-9597-c897502d4457\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649864 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs\") pod \"b0f08db2-363f-4b5f-be21-65f911ae2fba\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.649933 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slv8v\" (UniqueName: \"kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v\") pod \"b0f08db2-363f-4b5f-be21-65f911ae2fba\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.650074 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle\") pod \"b0f08db2-363f-4b5f-be21-65f911ae2fba\" (UID: \"b0f08db2-363f-4b5f-be21-65f911ae2fba\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.650108 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle\") pod \"af479b63-5f6c-43b3-9597-c897502d4457\" (UID: \"af479b63-5f6c-43b3-9597-c897502d4457\") " Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.650369 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs" 
(OuterVolumeSpecName: "logs") pod "af479b63-5f6c-43b3-9597-c897502d4457" (UID: "af479b63-5f6c-43b3-9597-c897502d4457"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.651437 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs" (OuterVolumeSpecName: "logs") pod "b0f08db2-363f-4b5f-be21-65f911ae2fba" (UID: "b0f08db2-363f-4b5f-be21-65f911ae2fba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.655267 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7" (OuterVolumeSpecName: "kube-api-access-6zlg7") pod "af479b63-5f6c-43b3-9597-c897502d4457" (UID: "af479b63-5f6c-43b3-9597-c897502d4457"). InnerVolumeSpecName "kube-api-access-6zlg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.656681 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v" (OuterVolumeSpecName: "kube-api-access-slv8v") pod "b0f08db2-363f-4b5f-be21-65f911ae2fba" (UID: "b0f08db2-363f-4b5f-be21-65f911ae2fba"). InnerVolumeSpecName "kube-api-access-slv8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.670473 5058 scope.go:117] "RemoveContainer" containerID="eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a" Oct 14 09:07:24 crc kubenswrapper[5058]: E1014 09:07:24.670969 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a\": container with ID starting with eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a not found: ID does not exist" containerID="eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.670996 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a"} err="failed to get container status \"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a\": rpc error: code = NotFound desc = could not find container \"eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a\": container with ID starting with eaf89fa5503eb6f999c02ab4524731c48936e045d291821e62396ed8be389d0a not found: ID does not exist" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.671016 5058 scope.go:117] "RemoveContainer" containerID="c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18" Oct 14 09:07:24 crc kubenswrapper[5058]: E1014 09:07:24.671275 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18\": container with ID starting with c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18 not found: ID does not exist" containerID="c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.671293 5058 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18"} err="failed to get container status \"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18\": rpc error: code = NotFound desc = could not find container \"c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18\": container with ID starting with c07e7f43e325daca76ba7f0eba77574c2c2d0807bf70cdbe0f5c78b74cb40e18 not found: ID does not exist" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.671306 5058 scope.go:117] "RemoveContainer" containerID="d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.676586 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data" (OuterVolumeSpecName: "config-data") pod "b0f08db2-363f-4b5f-be21-65f911ae2fba" (UID: "b0f08db2-363f-4b5f-be21-65f911ae2fba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.679349 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af479b63-5f6c-43b3-9597-c897502d4457" (UID: "af479b63-5f6c-43b3-9597-c897502d4457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.681298 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0f08db2-363f-4b5f-be21-65f911ae2fba" (UID: "b0f08db2-363f-4b5f-be21-65f911ae2fba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.684714 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data" (OuterVolumeSpecName: "config-data") pod "af479b63-5f6c-43b3-9597-c897502d4457" (UID: "af479b63-5f6c-43b3-9597-c897502d4457"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.699615 5058 scope.go:117] "RemoveContainer" containerID="cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.723432 5058 scope.go:117] "RemoveContainer" containerID="d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5" Oct 14 09:07:24 crc kubenswrapper[5058]: E1014 09:07:24.723808 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5\": container with ID starting with d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5 not found: ID does not exist" containerID="d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.723854 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5"} err="failed to get container status \"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5\": rpc error: code = NotFound desc = could not find container \"d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5\": container with ID starting with d7fc3eb2fd92a1fc87adc2c08555e05b625f2ad8c1b3f0917f339bfd4136b8b5 not found: ID does not exist" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.723880 5058 scope.go:117] "RemoveContainer" containerID="cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6" Oct 14 09:07:24 crc kubenswrapper[5058]: E1014 09:07:24.727328 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6\": container with ID starting with cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6 not found: ID does not exist" containerID="cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.727424 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6"} err="failed to get container status \"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6\": rpc error: code = NotFound desc = could not find container \"cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6\": container with ID starting with cf379c51a493707f8b4b9859acfced560e4567dbafaef0fd2ae3e05353c4aef6 not found: ID does not exist" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752193 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af479b63-5f6c-43b3-9597-c897502d4457-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752225 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752240 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752253 5058 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zlg7\" (UniqueName: \"kubernetes.io/projected/af479b63-5f6c-43b3-9597-c897502d4457-kube-api-access-6zlg7\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752266 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f08db2-363f-4b5f-be21-65f911ae2fba-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752278 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slv8v\" (UniqueName: \"kubernetes.io/projected/b0f08db2-363f-4b5f-be21-65f911ae2fba-kube-api-access-slv8v\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752289 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f08db2-363f-4b5f-be21-65f911ae2fba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.752301 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af479b63-5f6c-43b3-9597-c897502d4457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.886507 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.965243 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.982656 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:24 crc kubenswrapper[5058]: I1014 09:07:24.998980 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.022674 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033286 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033759 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-log" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033787 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-log" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033817 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a624d348-510d-461d-b705-f8eb754d02ba" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033824 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a624d348-510d-461d-b705-f8eb754d02ba" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033832 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-api" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033840 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-api" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033860 5058 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db5b7921-0fe3-4434-b195-269571dcb814" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033866 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5b7921-0fe3-4434-b195-269571dcb814" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033883 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerName="nova-scheduler-scheduler" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033889 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerName="nova-scheduler-scheduler" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033902 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9269bf36-7053-4374-8049-ead5e973b407" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033908 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9269bf36-7053-4374-8049-ead5e973b407" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033928 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-metadata" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033934 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-metadata" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.033945 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-log" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.033952 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-log" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034134 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-api" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034154 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" containerName="nova-api-log" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034167 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-metadata" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034176 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="af479b63-5f6c-43b3-9597-c897502d4457" containerName="nova-metadata-log" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034187 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9269bf36-7053-4374-8049-ead5e973b407" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034197 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5b7921-0fe3-4434-b195-269571dcb814" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034205 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a624d348-510d-461d-b705-f8eb754d02ba" containerName="nova-manage" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.034223 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerName="nova-scheduler-scheduler" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.035225 
5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.037714 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.041840 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.044521 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.047506 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.050279 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.057564 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle\") pod \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.057671 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb\") pod \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.057823 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data\") pod \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\" (UID: \"970c4584-8ef4-44d1-8fd3-ff415a3987a1\") " Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.058934 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.063576 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb" (OuterVolumeSpecName: "kube-api-access-sxpkb") pod "970c4584-8ef4-44d1-8fd3-ff415a3987a1" (UID: "970c4584-8ef4-44d1-8fd3-ff415a3987a1"). InnerVolumeSpecName "kube-api-access-sxpkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.087944 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data" (OuterVolumeSpecName: "config-data") pod "970c4584-8ef4-44d1-8fd3-ff415a3987a1" (UID: "970c4584-8ef4-44d1-8fd3-ff415a3987a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.092457 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970c4584-8ef4-44d1-8fd3-ff415a3987a1" (UID: "970c4584-8ef4-44d1-8fd3-ff415a3987a1"). InnerVolumeSpecName "combined-ca-bundle". 
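
Around this point the pods keep their names (nova-api-0, nova-metadata-0, nova-scheduler-0) but come back with fresh UIDs: the reconciler lines quote both, e.g. pod \"nova-metadata-0\" (UID: \"94a23eed-...\"). A sketch for extracting that name-to-UID handoff follows, under the same stdin assumption as above. One caveat: the teardown lines for the old pods quote the UID in the name slot as well (the API object is already gone), so within a short excerpt the history mostly shows the new UID; over a longer capture the handoff appears as two entries per name.

```
#!/usr/bin/env python3
"""Track pod-name -> UID handoffs across delete/recreate cycles."""
import re
import sys

# Matches the escaped-quote form used inside klog structured values:
#   pod \"nova-metadata-0\" (UID: \"94a23eed-...\")
PAIR_RE = re.compile(r'pod \\"([a-z0-9-]+)\\" \(UID: \\"([0-9a-f-]{36})\\"\)')

def uid_history(text: str) -> dict[str, list[str]]:
    text = " ".join(text.split())
    history: dict[str, list[str]] = {}
    for name, uid in PAIR_RE.findall(text):
        if name == uid:      # teardown lines quote the UID in the name slot
            continue
        uids = history.setdefault(name, [])
        if uid not in uids:  # keep first-seen order, drop repeats
            uids.append(uid)
    return history

if __name__ == "__main__":
    for name, uids in uid_history(sys.stdin.read()).items():
        print(f"{name}: {' -> '.join(u[:8] for u in uids)}")
```
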
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159601 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159715 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159752 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9b7m\" (UniqueName: \"kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159771 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159789 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159882 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159901 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.159917 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.160015 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.160026 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970c4584-8ef4-44d1-8fd3-ff415a3987a1-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.160038 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxpkb\" (UniqueName: \"kubernetes.io/projected/970c4584-8ef4-44d1-8fd3-ff415a3987a1-kube-api-access-sxpkb\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.261962 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262026 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9b7m\" (UniqueName: \"kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262048 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262069 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262147 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262183 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.262225 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.263173 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.265923 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.266926 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.267036 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.267520 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.269158 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.287441 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r\") pod \"nova-metadata-0\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.290591 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9b7m\" (UniqueName: \"kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m\") pod \"nova-api-0\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.363102 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.446630 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.617765 5058 generic.go:334] "Generic (PLEG): container finished" podID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" exitCode=0 Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.617830 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.617834 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"970c4584-8ef4-44d1-8fd3-ff415a3987a1","Type":"ContainerDied","Data":"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65"} Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.617867 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"970c4584-8ef4-44d1-8fd3-ff415a3987a1","Type":"ContainerDied","Data":"e4ba41021f6f1a7b490e9560389dca32c33a4c222fdf84bedaa152b25a28ff8a"} Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.617905 5058 scope.go:117] "RemoveContainer" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.676155 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.677685 5058 scope.go:117] "RemoveContainer" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" Oct 14 09:07:25 crc kubenswrapper[5058]: E1014 09:07:25.678206 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65\": container with ID starting with 2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65 not found: ID does not exist" containerID="2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.678237 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65"} err="failed to get container status \"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65\": rpc error: code = NotFound desc = could not find container \"2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65\": container with ID starting with 2d72600c51ac595e025cee070643c2107f19b8c09dba8f64c8c234fac9737b65 not found: ID does not exist" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.683574 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.690915 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.692244 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.694341 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.730332 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.773433 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ssl\" (UniqueName: \"kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.773485 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.773598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.875597 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.875680 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ssl\" (UniqueName: \"kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.875720 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.881511 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.882024 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.900027 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ssl\" (UniqueName: 
\"kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl\") pod \"nova-scheduler-0\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " pod="openstack/nova-scheduler-0" Oct 14 09:07:25 crc kubenswrapper[5058]: I1014 09:07:25.959097 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:07:25 crc kubenswrapper[5058]: W1014 09:07:25.967657 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a23eed_d2e1_422f_acbc_40af4cab721c.slice/crio-d28e9bb139b0b6835a79d92b63a2ade7bf931ea4fd887bf0dbe0f640e8ac0334 WatchSource:0}: Error finding container d28e9bb139b0b6835a79d92b63a2ade7bf931ea4fd887bf0dbe0f640e8ac0334: Status 404 returned error can't find the container with id d28e9bb139b0b6835a79d92b63a2ade7bf931ea4fd887bf0dbe0f640e8ac0334 Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.013899 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.034443 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:07:26 crc kubenswrapper[5058]: W1014 09:07:26.054464 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e8ec3d_3c02_4720_a15e_ca297fa8a6b7.slice/crio-5cde77956f03e26f2102aa475390075a6ba6294ecbcd5825a67c020df27673d6 WatchSource:0}: Error finding container 5cde77956f03e26f2102aa475390075a6ba6294ecbcd5825a67c020df27673d6: Status 404 returned error can't find the container with id 5cde77956f03e26f2102aa475390075a6ba6294ecbcd5825a67c020df27673d6 Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.526922 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.643820 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b60f1acf-e39d-4169-ab16-0c77fa2d92ef","Type":"ContainerStarted","Data":"cd725518d948fa42762a8d46482442d274e8e384416dbdaee4d785cfbaa9afe9"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.647160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerStarted","Data":"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.647203 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerStarted","Data":"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.647214 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerStarted","Data":"5cde77956f03e26f2102aa475390075a6ba6294ecbcd5825a67c020df27673d6"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.650831 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerStarted","Data":"19b2175ddcb06631577fe677e30dab23bd6bfde38844f7815ad521860f9ec564"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.650870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerStarted","Data":"1b7cabe79a9fb0aa7148bbfcda03eb7a9f05811c4c6ff8244605bd4bac2b1a8b"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.650887 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerStarted","Data":"d28e9bb139b0b6835a79d92b63a2ade7bf931ea4fd887bf0dbe0f640e8ac0334"} Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.682140 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.682125011 podStartE2EDuration="2.682125011s" podCreationTimestamp="2025-10-14 09:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:26.677733566 +0000 UTC m=+8394.588817372" watchObservedRunningTime="2025-10-14 09:07:26.682125011 +0000 UTC m=+8394.593208807" Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.703744 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.703728959 podStartE2EDuration="2.703728959s" podCreationTimestamp="2025-10-14 09:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:26.698716316 +0000 UTC m=+8394.609800122" watchObservedRunningTime="2025-10-14 09:07:26.703728959 +0000 UTC m=+8394.614812765" Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.805651 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970c4584-8ef4-44d1-8fd3-ff415a3987a1" path="/var/lib/kubelet/pods/970c4584-8ef4-44d1-8fd3-ff415a3987a1/volumes" Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.806190 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af479b63-5f6c-43b3-9597-c897502d4457" path="/var/lib/kubelet/pods/af479b63-5f6c-43b3-9597-c897502d4457/volumes" Oct 14 09:07:26 crc kubenswrapper[5058]: I1014 09:07:26.806750 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f08db2-363f-4b5f-be21-65f911ae2fba" path="/var/lib/kubelet/pods/b0f08db2-363f-4b5f-be21-65f911ae2fba/volumes" Oct 14 09:07:27 crc kubenswrapper[5058]: I1014 09:07:27.048548 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jmlj2"] Oct 14 09:07:27 crc kubenswrapper[5058]: I1014 09:07:27.058389 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jmlj2"] Oct 14 09:07:27 crc kubenswrapper[5058]: I1014 09:07:27.681309 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b60f1acf-e39d-4169-ab16-0c77fa2d92ef","Type":"ContainerStarted","Data":"3624b3b60feee0d575d3fe2990941128a77701a291e7296e3f6b3b67a7283690"} Oct 14 09:07:27 crc kubenswrapper[5058]: I1014 09:07:27.704426 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.70440601 podStartE2EDuration="2.70440601s" podCreationTimestamp="2025-10-14 09:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:27.697447821 +0000 UTC m=+8395.608531637" watchObservedRunningTime="2025-10-14 09:07:27.70440601 +0000 UTC m=+8395.615489836" Oct 14 09:07:28 crc kubenswrapper[5058]: I1014 09:07:28.813864 5058 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdf27fe-36f5-4e62-9dcd-00f092497365" path="/var/lib/kubelet/pods/cfdf27fe-36f5-4e62-9dcd-00f092497365/volumes" Oct 14 09:07:30 crc kubenswrapper[5058]: I1014 09:07:30.363626 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:07:30 crc kubenswrapper[5058]: I1014 09:07:30.364124 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:07:31 crc kubenswrapper[5058]: I1014 09:07:31.015133 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 09:07:35 crc kubenswrapper[5058]: I1014 09:07:35.363979 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:07:35 crc kubenswrapper[5058]: I1014 09:07:35.364746 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:07:35 crc kubenswrapper[5058]: I1014 09:07:35.448692 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:07:35 crc kubenswrapper[5058]: I1014 09:07:35.449273 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.014342 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.048653 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.447284 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.135:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.447318 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.135:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.531054 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.531995 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:07:36 crc kubenswrapper[5058]: I1014 09:07:36.826779 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 09:07:40 crc kubenswrapper[5058]: I1014 09:07:40.052181 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k5mpt"] Oct 14 09:07:40 crc kubenswrapper[5058]: I1014 09:07:40.067234 
5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k5mpt"] Oct 14 09:07:40 crc kubenswrapper[5058]: I1014 09:07:40.815111 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60672787-5a75-4785-b50b-853a1ad16180" path="/var/lib/kubelet/pods/60672787-5a75-4785-b50b-853a1ad16180/volumes" Oct 14 09:07:42 crc kubenswrapper[5058]: I1014 09:07:42.939564 5058 scope.go:117] "RemoveContainer" containerID="f523f938c96cf5d2dd5233b357f3adf8b207dec646add18faffbaaddf523a4fa" Oct 14 09:07:42 crc kubenswrapper[5058]: I1014 09:07:42.986770 5058 scope.go:117] "RemoveContainer" containerID="ee38261062e87ceb3013524f80a5fdd142461e139ca0ea3c6ba7c88b9478b48d" Oct 14 09:07:43 crc kubenswrapper[5058]: I1014 09:07:43.075655 5058 scope.go:117] "RemoveContainer" containerID="9a1d7df0dc9a838ff712850419c9fec6ff0ae3e628b470e338345c6818d07c26" Oct 14 09:07:43 crc kubenswrapper[5058]: I1014 09:07:43.106150 5058 scope.go:117] "RemoveContainer" containerID="d029ab76c501801cac65070088bf6bcc40cbcbb1d1bb6886707a0643897e3da5" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.367715 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.369459 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.372515 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.455009 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.456688 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.456743 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.461965 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.892826 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.895514 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 09:07:45 crc kubenswrapper[5058]: I1014 09:07:45.897429 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.109233 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"] Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.110856 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.119282 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"] Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.153238 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.153307 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.153403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.153443 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9wn\" (UniqueName: \"kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.153469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255007 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255068 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255153 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255186 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bk9wn\" (UniqueName: \"kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255206 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.255979 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.256014 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.257716 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.258358 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.273729 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9wn\" (UniqueName: \"kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn\") pod \"dnsmasq-dns-7d859d846c-b7t7c\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") " pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.443105 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:46 crc kubenswrapper[5058]: I1014 09:07:46.915558 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"] Oct 14 09:07:47 crc kubenswrapper[5058]: I1014 09:07:47.922310 5058 generic.go:334] "Generic (PLEG): container finished" podID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerID="29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f" exitCode=0 Oct 14 09:07:47 crc kubenswrapper[5058]: I1014 09:07:47.922385 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" event={"ID":"2ee4bc78-886a-4387-b827-5a1822a278cc","Type":"ContainerDied","Data":"29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f"} Oct 14 09:07:47 crc kubenswrapper[5058]: I1014 09:07:47.926302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" event={"ID":"2ee4bc78-886a-4387-b827-5a1822a278cc","Type":"ContainerStarted","Data":"93c45a3c4c85c0fae21499a915d42a313b7443f39f81d2349297c567fcf72f7c"} Oct 14 09:07:48 crc kubenswrapper[5058]: I1014 09:07:48.936567 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" event={"ID":"2ee4bc78-886a-4387-b827-5a1822a278cc","Type":"ContainerStarted","Data":"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"} Oct 14 09:07:48 crc kubenswrapper[5058]: I1014 09:07:48.937065 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:48 crc kubenswrapper[5058]: I1014 09:07:48.956565 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" podStartSLOduration=2.956540711 podStartE2EDuration="2.956540711s" podCreationTimestamp="2025-10-14 09:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:07:48.953441592 +0000 UTC m=+8416.864525418" watchObservedRunningTime="2025-10-14 09:07:48.956540711 +0000 UTC m=+8416.867624517" Oct 14 09:07:56 crc kubenswrapper[5058]: I1014 09:07:56.444997 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" Oct 14 09:07:56 crc kubenswrapper[5058]: I1014 09:07:56.516459 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:07:56 crc kubenswrapper[5058]: I1014 09:07:56.516731 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="dnsmasq-dns" containerID="cri-o://259a8210a3ea80b0d226e3b994fafb64254fe2072b053941265ece8c91cd46a3" gracePeriod=10 Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.073579 5058 generic.go:334] "Generic (PLEG): container finished" podID="deaaf634-252e-4273-be94-541559e377f7" containerID="259a8210a3ea80b0d226e3b994fafb64254fe2072b053941265ece8c91cd46a3" exitCode=0 Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.073614 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" event={"ID":"deaaf634-252e-4273-be94-541559e377f7","Type":"ContainerDied","Data":"259a8210a3ea80b0d226e3b994fafb64254fe2072b053941265ece8c91cd46a3"} Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.073874 5058 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" event={"ID":"deaaf634-252e-4273-be94-541559e377f7","Type":"ContainerDied","Data":"3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b"} Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.073890 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ee08ed3a44b19c2de13b49ba56b6084bc1b6acea879e54f6cf977f0e2017c9b" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.086809 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.204170 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb\") pod \"deaaf634-252e-4273-be94-541559e377f7\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.204218 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config\") pod \"deaaf634-252e-4273-be94-541559e377f7\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.204251 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc\") pod \"deaaf634-252e-4273-be94-541559e377f7\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.204394 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbvr\" (UniqueName: \"kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr\") pod \"deaaf634-252e-4273-be94-541559e377f7\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.204439 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb\") pod \"deaaf634-252e-4273-be94-541559e377f7\" (UID: \"deaaf634-252e-4273-be94-541559e377f7\") " Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.210119 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr" (OuterVolumeSpecName: "kube-api-access-tcbvr") pod "deaaf634-252e-4273-be94-541559e377f7" (UID: "deaaf634-252e-4273-be94-541559e377f7"). InnerVolumeSpecName "kube-api-access-tcbvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.263184 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deaaf634-252e-4273-be94-541559e377f7" (UID: "deaaf634-252e-4273-be94-541559e377f7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.266140 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "deaaf634-252e-4273-be94-541559e377f7" (UID: "deaaf634-252e-4273-be94-541559e377f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.270698 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config" (OuterVolumeSpecName: "config") pod "deaaf634-252e-4273-be94-541559e377f7" (UID: "deaaf634-252e-4273-be94-541559e377f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.285391 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deaaf634-252e-4273-be94-541559e377f7" (UID: "deaaf634-252e-4273-be94-541559e377f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.306543 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.306574 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.306583 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.306591 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deaaf634-252e-4273-be94-541559e377f7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:57 crc kubenswrapper[5058]: I1014 09:07:57.306601 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbvr\" (UniqueName: \"kubernetes.io/projected/deaaf634-252e-4273-be94-541559e377f7-kube-api-access-tcbvr\") on node \"crc\" DevicePath \"\"" Oct 14 09:07:58 crc kubenswrapper[5058]: I1014 09:07:58.081667 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" Oct 14 09:07:58 crc kubenswrapper[5058]: I1014 09:07:58.116599 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:07:58 crc kubenswrapper[5058]: I1014 09:07:58.124397 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c68978599-5w4l9"] Oct 14 09:07:58 crc kubenswrapper[5058]: I1014 09:07:58.814006 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaaf634-252e-4273-be94-541559e377f7" path="/var/lib/kubelet/pods/deaaf634-252e-4273-be94-541559e377f7/volumes" Oct 14 09:08:01 crc kubenswrapper[5058]: I1014 09:08:01.944621 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c68978599-5w4l9" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.121:5353: i/o timeout" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.153321 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"] Oct 14 09:08:06 crc kubenswrapper[5058]: E1014 09:08:06.154416 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="dnsmasq-dns" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.154434 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="dnsmasq-dns" Oct 14 09:08:06 crc kubenswrapper[5058]: E1014 09:08:06.154477 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="init" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.154485 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="init" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.154703 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaaf634-252e-4273-be94-541559e377f7" containerName="dnsmasq-dns" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.156099 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.164217 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.164235 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-x5pjv" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.164226 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.164538 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.183173 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.224855 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.225114 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-log" containerID="cri-o://ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718" gracePeriod=30 Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.225518 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-httpd" containerID="cri-o://bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6" gracePeriod=30 Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.270679 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.270917 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-log" containerID="cri-o://dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910" gracePeriod=30 Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.271046 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-httpd" containerID="cri-o://123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b" gracePeriod=30 Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.287483 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.289059 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.296481 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.296595 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8zjg\" (UniqueName: \"kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.296954 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.297006 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.297039 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.306204 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399013 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399291 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399321 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399354 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399407 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399440 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399481 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399503 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5jb\" (UniqueName: \"kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399529 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8zjg\" (UniqueName: \"kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.399563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.400508 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.400641 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.400818 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data\") pod \"horizon-7b64d69759-5xv4k\" (UID: 
\"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.405472 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.421639 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8zjg\" (UniqueName: \"kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg\") pod \"horizon-7b64d69759-5xv4k\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501152 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5jb\" (UniqueName: \"kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501203 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501262 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501324 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.501731 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.502059 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.502864 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.504642 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.515648 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.522762 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5jb\" (UniqueName: \"kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb\") pod \"horizon-56449c6b9f-8wcf8\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.622046 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.915581 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.982848 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:08:06 crc kubenswrapper[5058]: I1014 09:08:06.984451 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.019854 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.109091 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"] Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.111741 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.111822 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.111865 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmm8\" (UniqueName: \"kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.111899 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.111956 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.112281 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:08:07 crc kubenswrapper[5058]: W1014 09:08:07.166639 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc818af_cd39_4bb4_97a4_9c19766e7136.slice/crio-b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173 WatchSource:0}: Error finding container b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173: Status 404 returned error can't find the container with id b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173 Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.170008 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.192833 5058 generic.go:334] "Generic (PLEG): container finished" podID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerID="ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718" exitCode=143 Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.192898 
5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerDied","Data":"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718"} Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.197155 5058 generic.go:334] "Generic (PLEG): container finished" podID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerID="dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910" exitCode=143 Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.197227 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerDied","Data":"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910"} Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.198485 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56449c6b9f-8wcf8" event={"ID":"9bc818af-cd39-4bb4-97a4-9c19766e7136","Type":"ContainerStarted","Data":"b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173"} Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.199323 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerStarted","Data":"432e54ae7898c80a72b650f16a80c7aeaa1b5704775bf581f5fe51ede278f40d"} Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213183 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213268 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213312 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgmm8\" (UniqueName: \"kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213353 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213392 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.213887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.214183 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.214914 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.218107 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.234638 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmm8\" (UniqueName: \"kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8\") pod \"horizon-84497fcc7f-cv9tp\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") " pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.341487 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:08:07 crc kubenswrapper[5058]: I1014 09:08:07.822996 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:08:08 crc kubenswrapper[5058]: I1014 09:08:08.211828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerStarted","Data":"5bbdf303d52a6e8b5cb8ebbaaa39422e756dd2dc6ceb675606b6525566ba5f55"} Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.919036 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.986605 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987118 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987162 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987184 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987240 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987277 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.987345 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq2wk\" (UniqueName: \"kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk\") pod \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\" (UID: \"f727bef8-4971-4bc5-80d2-23eb4840bfbb\") " Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.989443 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs" (OuterVolumeSpecName: "logs") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.990765 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.994245 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk" (OuterVolumeSpecName: "kube-api-access-mq2wk") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "kube-api-access-mq2wk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:08:09 crc kubenswrapper[5058]: I1014 09:08:09.994634 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts" (OuterVolumeSpecName: "scripts") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.022034 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.062065 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data" (OuterVolumeSpecName: "config-data") pod "f727bef8-4971-4bc5-80d2-23eb4840bfbb" (UID: "f727bef8-4971-4bc5-80d2-23eb4840bfbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088499 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knl5n\" (UniqueName: \"kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088586 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088664 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088691 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088730 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.088771 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle\") pod \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\" (UID: \"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15\") " Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089261 5058 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089278 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089287 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089296 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f727bef8-4971-4bc5-80d2-23eb4840bfbb-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089306 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f727bef8-4971-4bc5-80d2-23eb4840bfbb-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.089314 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq2wk\" (UniqueName: \"kubernetes.io/projected/f727bef8-4971-4bc5-80d2-23eb4840bfbb-kube-api-access-mq2wk\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.091514 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs" (OuterVolumeSpecName: "logs") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.091777 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.094261 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n" (OuterVolumeSpecName: "kube-api-access-knl5n") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). InnerVolumeSpecName "kube-api-access-knl5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.094337 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts" (OuterVolumeSpecName: "scripts") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.133943 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.191994 5058 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.192032 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.192040 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.192052 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knl5n\" (UniqueName: \"kubernetes.io/projected/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-kube-api-access-knl5n\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.192061 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.212975 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data" (OuterVolumeSpecName: "config-data") pod "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" (UID: "46eea8ed-0b19-4233-9a00-e3fd1e5a5a15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.280987 5058 generic.go:334] "Generic (PLEG): container finished" podID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerID="123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b" exitCode=0 Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.281071 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerDied","Data":"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b"} Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.281097 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"46eea8ed-0b19-4233-9a00-e3fd1e5a5a15","Type":"ContainerDied","Data":"d54350ad5935d7c1e96ce861496c51b2f5a8f404a715946c80466559fc884b0f"} Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.281112 5058 scope.go:117] "RemoveContainer" containerID="123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.281238 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.289532 5058 generic.go:334] "Generic (PLEG): container finished" podID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerID="bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6" exitCode=0 Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.289573 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerDied","Data":"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6"} Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.289603 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f727bef8-4971-4bc5-80d2-23eb4840bfbb","Type":"ContainerDied","Data":"0c400045f893859d277f3296e3956be484a7ac57177075a8e5784683da5b0c3c"} Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.289611 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.293123 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.318826 5058 scope.go:117] "RemoveContainer" containerID="dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.336978 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.360392 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.365867 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.372270 5058 scope.go:117] "RemoveContainer" containerID="123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.373457 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.378394 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b\": container with ID starting with 123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b not found: ID does not exist" containerID="123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.378438 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b"} err="failed to get container status \"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b\": rpc error: code = NotFound desc = could not find container \"123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b\": container with ID starting with 123e3758f788fe21746a81fd7a44fde89e5d2c9ae4f4ba655364de334097036b not found: ID does not exist" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.378468 5058 
scope.go:117] "RemoveContainer" containerID="dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.379333 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910\": container with ID starting with dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910 not found: ID does not exist" containerID="dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.379371 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910"} err="failed to get container status \"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910\": rpc error: code = NotFound desc = could not find container \"dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910\": container with ID starting with dd6134dc6ffbe18c79c25fd0489d473fb69023ea00eaec2ff6348256452b5910 not found: ID does not exist" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.379398 5058 scope.go:117] "RemoveContainer" containerID="bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.384567 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.385047 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385062 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.385073 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385079 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.385090 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385098 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.385106 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385112 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385307 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385323 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385336 5058 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-log" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.385349 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" containerName="glance-httpd" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.386382 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.391100 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gf2t8" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.391325 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.391589 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.394698 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.405184 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.406775 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.412904 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.414996 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.437648 5058 scope.go:117] "RemoveContainer" containerID="ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.474033 5058 scope.go:117] "RemoveContainer" containerID="bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.474508 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6\": container with ID starting with bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6 not found: ID does not exist" containerID="bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.474536 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6"} err="failed to get container status \"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6\": rpc error: code = NotFound desc = could not find container \"bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6\": container with ID starting with bae0e4e53a3c2a2fca4cdb460b978c90fea2e7f2d26789ded412c63ac2c838a6 not found: ID does not exist" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.474555 5058 scope.go:117] "RemoveContainer" containerID="ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718" Oct 14 09:08:10 crc kubenswrapper[5058]: E1014 09:08:10.474952 5058 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718\": container with ID starting with ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718 not found: ID does not exist" containerID="ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.474978 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718"} err="failed to get container status \"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718\": rpc error: code = NotFound desc = could not find container \"ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718\": container with ID starting with ee322d2b2ddd024ac37cda77198656078be8d7f7a55777a642fbf217fcd69718 not found: ID does not exist" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496132 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4zr\" (UniqueName: \"kubernetes.io/projected/53779db0-8c45-4fac-bee5-4ed03edabbb5-kube-api-access-9x4zr\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496177 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496237 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496287 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-scripts\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496312 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gxd\" (UniqueName: \"kubernetes.io/projected/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-kube-api-access-d9gxd\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496386 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496414 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496477 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496498 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-logs\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496553 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.496684 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4zr\" (UniqueName: \"kubernetes.io/projected/53779db0-8c45-4fac-bee5-4ed03edabbb5-kube-api-access-9x4zr\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598279 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598310 5058 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-scripts\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598351 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598374 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9gxd\" (UniqueName: \"kubernetes.io/projected/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-kube-api-access-d9gxd\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598395 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598443 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598483 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598560 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-logs\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598616 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.598702 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.599014 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.599392 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.600404 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53779db0-8c45-4fac-bee5-4ed03edabbb5-logs\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.600774 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.602621 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-scripts\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.605878 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.608968 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53779db0-8c45-4fac-bee5-4ed03edabbb5-config-data\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.611764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.614879 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4zr\" (UniqueName: \"kubernetes.io/projected/53779db0-8c45-4fac-bee5-4ed03edabbb5-kube-api-access-9x4zr\") pod \"glance-default-external-api-0\" (UID: \"53779db0-8c45-4fac-bee5-4ed03edabbb5\") " pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.615570 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.617515 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9gxd\" (UniqueName: \"kubernetes.io/projected/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-kube-api-access-d9gxd\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.620015 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3\") " pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.724705 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.733763 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.830530 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46eea8ed-0b19-4233-9a00-e3fd1e5a5a15" path="/var/lib/kubelet/pods/46eea8ed-0b19-4233-9a00-e3fd1e5a5a15/volumes" Oct 14 09:08:10 crc kubenswrapper[5058]: I1014 09:08:10.831313 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f727bef8-4971-4bc5-80d2-23eb4840bfbb" path="/var/lib/kubelet/pods/f727bef8-4971-4bc5-80d2-23eb4840bfbb/volumes" Oct 14 09:08:11 crc kubenswrapper[5058]: I1014 09:08:11.208586 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 09:08:11 crc kubenswrapper[5058]: I1014 09:08:11.309357 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 09:08:16 crc kubenswrapper[5058]: W1014 09:08:16.085014 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d5fc38_a61f_494a_ae5b_6cc9dc35f6c3.slice/crio-8c49ccadd911db2d5fbc20b194676a4c67b5d961c530c90d26c012528dad3250 WatchSource:0}: Error finding container 8c49ccadd911db2d5fbc20b194676a4c67b5d961c530c90d26c012528dad3250: Status 404 returned error can't find the container with id 8c49ccadd911db2d5fbc20b194676a4c67b5d961c530c90d26c012528dad3250 Oct 14 09:08:16 crc kubenswrapper[5058]: I1014 09:08:16.367846 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3","Type":"ContainerStarted","Data":"8c49ccadd911db2d5fbc20b194676a4c67b5d961c530c90d26c012528dad3250"} Oct 14 09:08:16 crc kubenswrapper[5058]: I1014 09:08:16.370886 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53779db0-8c45-4fac-bee5-4ed03edabbb5","Type":"ContainerStarted","Data":"e1487db695f025c0a7c69441c4fdf651b3c2c263c076bb983dd18531453e5124"} Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.381014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53779db0-8c45-4fac-bee5-4ed03edabbb5","Type":"ContainerStarted","Data":"26b1b130144bb709ceef66d6661b3fe4d3df9538fae9d5bdb362ee3f2a06bbe1"} Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.383599 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56449c6b9f-8wcf8" event={"ID":"9bc818af-cd39-4bb4-97a4-9c19766e7136","Type":"ContainerStarted","Data":"4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.383683 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56449c6b9f-8wcf8" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon" containerID="cri-o://4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a" gracePeriod=30
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.383973 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56449c6b9f-8wcf8" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon-log" containerID="cri-o://4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2" gracePeriod=30
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.390158 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerStarted","Data":"68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.390193 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerStarted","Data":"f6887742664d6289d8a7fa06ee72ea9ed0d81273a546a4f0279be9e0a6fe30c7"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.391651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerStarted","Data":"86d3514c3c6b4efc97d5d54b3ffe0c368ad4dccc85903d248ad69cd7ffcdb79f"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.391688 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerStarted","Data":"60cdd7390d59d32e4f2e315c05d4b96eefd406ab2c7f3ba099fe2911df0cba45"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.394985 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3","Type":"ContainerStarted","Data":"b1811c39adb41c69f041e33af85920c5739dcad57d73c57fef07cb87275fae6a"}
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.409001 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56449c6b9f-8wcf8" podStartSLOduration=2.353427927 podStartE2EDuration="11.408985009s" podCreationTimestamp="2025-10-14 09:08:06 +0000 UTC" firstStartedPulling="2025-10-14 09:08:07.169364553 +0000 UTC m=+8435.080448359" lastFinishedPulling="2025-10-14 09:08:16.224921625 +0000 UTC m=+8444.136005441" observedRunningTime="2025-10-14 09:08:17.407838767 +0000 UTC m=+8445.318922573" watchObservedRunningTime="2025-10-14 09:08:17.408985009 +0000 UTC m=+8445.320068815"
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.436678 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b64d69759-5xv4k" podStartSLOduration=2.343639847 podStartE2EDuration="11.43664824s" podCreationTimestamp="2025-10-14 09:08:06 +0000 UTC" firstStartedPulling="2025-10-14 09:08:07.112106636 +0000 UTC m=+8435.023190442" lastFinishedPulling="2025-10-14 09:08:16.205115009 +0000 UTC m=+8444.116198835" observedRunningTime="2025-10-14 09:08:17.427244171 +0000 UTC m=+8445.338327977" watchObservedRunningTime="2025-10-14 09:08:17.43664824 +0000 UTC m=+8445.347732046"
Oct 14 09:08:17 crc kubenswrapper[5058]: I1014 09:08:17.453294 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84497fcc7f-cv9tp" podStartSLOduration=3.061018809 podStartE2EDuration="11.453280756s" podCreationTimestamp="2025-10-14 09:08:06 +0000 UTC" firstStartedPulling="2025-10-14 09:08:07.83098864 +0000 UTC m=+8435.742072446" lastFinishedPulling="2025-10-14 09:08:16.223250577 +0000 UTC m=+8444.134334393" observedRunningTime="2025-10-14 09:08:17.448270673 +0000 UTC m=+8445.359354479" watchObservedRunningTime="2025-10-14 09:08:17.453280756 +0000 UTC m=+8445.364364572"
Oct 14 09:08:18 crc kubenswrapper[5058]: I1014 09:08:18.406933 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3","Type":"ContainerStarted","Data":"ca447c80b3c0dcb0eda5b1a75c462eb3932dd8ad95e0d349ca137bbebf45cb33"}
Oct 14 09:08:18 crc kubenswrapper[5058]: I1014 09:08:18.410783 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53779db0-8c45-4fac-bee5-4ed03edabbb5","Type":"ContainerStarted","Data":"5c71ae51c6620559d34c28a38b27d3a38e69de580dd8af0e08c879f4136ca958"}
Oct 14 09:08:18 crc kubenswrapper[5058]: I1014 09:08:18.440208 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.440191262999999 podStartE2EDuration="8.440191263s" podCreationTimestamp="2025-10-14 09:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:08:18.437490316 +0000 UTC m=+8446.348574162" watchObservedRunningTime="2025-10-14 09:08:18.440191263 +0000 UTC m=+8446.351275069"
Oct 14 09:08:18 crc kubenswrapper[5058]: I1014 09:08:18.493836 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.493813447 podStartE2EDuration="8.493813447s" podCreationTimestamp="2025-10-14 09:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:08:18.484323255 +0000 UTC m=+8446.395407081" watchObservedRunningTime="2025-10-14 09:08:18.493813447 +0000 UTC m=+8446.404897263"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.725818 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.726192 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.742046 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.742096 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.760986 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.788786 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.816095 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:20 crc kubenswrapper[5058]: I1014 09:08:20.817393 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:21 crc kubenswrapper[5058]: I1014 09:08:21.443258 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:21 crc kubenswrapper[5058]: I1014 09:08:21.443751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:21 crc kubenswrapper[5058]: I1014 09:08:21.443764 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:21 crc kubenswrapper[5058]: I1014 09:08:21.443774 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:23 crc kubenswrapper[5058]: I1014 09:08:23.461463 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:08:23 crc kubenswrapper[5058]: I1014 09:08:23.570752 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:23 crc kubenswrapper[5058]: I1014 09:08:23.617846 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:24 crc kubenswrapper[5058]: I1014 09:08:24.472969 5058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 09:08:24 crc kubenswrapper[5058]: I1014 09:08:24.738858 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 14 09:08:24 crc kubenswrapper[5058]: I1014 09:08:24.745880 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 14 09:08:26 crc kubenswrapper[5058]: I1014 09:08:26.517298 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b64d69759-5xv4k"
Oct 14 09:08:26 crc kubenswrapper[5058]: I1014 09:08:26.517739 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b64d69759-5xv4k"
Oct 14 09:08:26 crc kubenswrapper[5058]: I1014 09:08:26.520141 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.139:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8080: connect: connection refused"
Oct 14 09:08:26 crc kubenswrapper[5058]: I1014 09:08:26.622621 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56449c6b9f-8wcf8"
Oct 14 09:08:27 crc kubenswrapper[5058]: I1014 09:08:27.342045 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:08:27 crc kubenswrapper[5058]: I1014 09:08:27.342120 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:08:27 crc kubenswrapper[5058]: I1014 09:08:27.343647 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.141:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.141:8080: connect: connection refused"
Oct 14 09:08:33 crc kubenswrapper[5058]: I1014 09:08:33.656203 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:08:33 crc kubenswrapper[5058]: I1014 09:08:33.656810 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:08:38 crc kubenswrapper[5058]: I1014 09:08:38.452674 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b64d69759-5xv4k"
Oct 14 09:08:39 crc kubenswrapper[5058]: I1014 09:08:39.194615 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:08:40 crc kubenswrapper[5058]: I1014 09:08:40.124892 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b64d69759-5xv4k"
Oct 14 09:08:40 crc kubenswrapper[5058]: I1014 09:08:40.940094 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:08:41 crc kubenswrapper[5058]: I1014 09:08:41.029588 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"]
Oct 14 09:08:41 crc kubenswrapper[5058]: I1014 09:08:41.029916 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon-log" containerID="cri-o://f6887742664d6289d8a7fa06ee72ea9ed0d81273a546a4f0279be9e0a6fe30c7" gracePeriod=30
Oct 14 09:08:41 crc kubenswrapper[5058]: I1014 09:08:41.030334 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" containerID="cri-o://68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838" gracePeriod=30
Oct 14 09:08:44 crc kubenswrapper[5058]: I1014 09:08:44.771144 5058 generic.go:334] "Generic (PLEG): container finished" podID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerID="68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838" exitCode=0
Oct 14 09:08:44 crc kubenswrapper[5058]: I1014 09:08:44.771246 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerDied","Data":"68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838"}
event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerDied","Data":"68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838"} Oct 14 09:08:46 crc kubenswrapper[5058]: I1014 09:08:46.517696 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.139:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8080: connect: connection refused" Oct 14 09:08:47 crc kubenswrapper[5058]: E1014 09:08:47.717725 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc818af_cd39_4bb4_97a4_9c19766e7136.slice/crio-conmon-4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc818af_cd39_4bb4_97a4_9c19766e7136.slice/crio-4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc818af_cd39_4bb4_97a4_9c19766e7136.slice/crio-conmon-4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a.scope\": RecentStats: unable to find data in memory cache]" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.802877 5058 generic.go:334] "Generic (PLEG): container finished" podID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerID="4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a" exitCode=137 Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.803181 5058 generic.go:334] "Generic (PLEG): container finished" podID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerID="4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2" exitCode=137 Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.802979 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56449c6b9f-8wcf8" event={"ID":"9bc818af-cd39-4bb4-97a4-9c19766e7136","Type":"ContainerDied","Data":"4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a"} Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.803211 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56449c6b9f-8wcf8" event={"ID":"9bc818af-cd39-4bb4-97a4-9c19766e7136","Type":"ContainerDied","Data":"4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2"} Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.803221 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56449c6b9f-8wcf8" event={"ID":"9bc818af-cd39-4bb4-97a4-9c19766e7136","Type":"ContainerDied","Data":"b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173"} Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.803231 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9609b7a29bc8da54123038049068a2c434d1f1cddf8f5ca7bd0b1c06ee96173" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.824057 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.882471 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc5jb\" (UniqueName: \"kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb\") pod \"9bc818af-cd39-4bb4-97a4-9c19766e7136\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.882659 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts\") pod \"9bc818af-cd39-4bb4-97a4-9c19766e7136\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.883480 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data\") pod \"9bc818af-cd39-4bb4-97a4-9c19766e7136\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.884093 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key\") pod \"9bc818af-cd39-4bb4-97a4-9c19766e7136\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.884467 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs\") pod \"9bc818af-cd39-4bb4-97a4-9c19766e7136\" (UID: \"9bc818af-cd39-4bb4-97a4-9c19766e7136\") " Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.884922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs" (OuterVolumeSpecName: "logs") pod "9bc818af-cd39-4bb4-97a4-9c19766e7136" (UID: "9bc818af-cd39-4bb4-97a4-9c19766e7136"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.886122 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc818af-cd39-4bb4-97a4-9c19766e7136-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.894255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9bc818af-cd39-4bb4-97a4-9c19766e7136" (UID: "9bc818af-cd39-4bb4-97a4-9c19766e7136"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.902857 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb" (OuterVolumeSpecName: "kube-api-access-vc5jb") pod "9bc818af-cd39-4bb4-97a4-9c19766e7136" (UID: "9bc818af-cd39-4bb4-97a4-9c19766e7136"). InnerVolumeSpecName "kube-api-access-vc5jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.919690 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data" (OuterVolumeSpecName: "config-data") pod "9bc818af-cd39-4bb4-97a4-9c19766e7136" (UID: "9bc818af-cd39-4bb4-97a4-9c19766e7136"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.922861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts" (OuterVolumeSpecName: "scripts") pod "9bc818af-cd39-4bb4-97a4-9c19766e7136" (UID: "9bc818af-cd39-4bb4-97a4-9c19766e7136"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.987347 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc5jb\" (UniqueName: \"kubernetes.io/projected/9bc818af-cd39-4bb4-97a4-9c19766e7136-kube-api-access-vc5jb\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.987374 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.987384 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bc818af-cd39-4bb4-97a4-9c19766e7136-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:47 crc kubenswrapper[5058]: I1014 09:08:47.987394 5058 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9bc818af-cd39-4bb4-97a4-9c19766e7136-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:08:48 crc kubenswrapper[5058]: I1014 09:08:48.818130 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56449c6b9f-8wcf8" Oct 14 09:08:48 crc kubenswrapper[5058]: I1014 09:08:48.888223 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:48 crc kubenswrapper[5058]: I1014 09:08:48.902435 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56449c6b9f-8wcf8"] Oct 14 09:08:50 crc kubenswrapper[5058]: I1014 09:08:50.814011 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" path="/var/lib/kubelet/pods/9bc818af-cd39-4bb4-97a4-9c19766e7136/volumes" Oct 14 09:08:56 crc kubenswrapper[5058]: I1014 09:08:56.517530 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.139:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8080: connect: connection refused" Oct 14 09:09:03 crc kubenswrapper[5058]: I1014 09:09:03.656238 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:09:03 crc kubenswrapper[5058]: I1014 09:09:03.657274 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:09:06 crc kubenswrapper[5058]: I1014 09:09:06.517285 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b64d69759-5xv4k" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.139:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8080: connect: connection refused" Oct 14 09:09:06 crc kubenswrapper[5058]: I1014 09:09:06.518028 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.112502 5058 generic.go:334] "Generic (PLEG): container finished" podID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerID="f6887742664d6289d8a7fa06ee72ea9ed0d81273a546a4f0279be9e0a6fe30c7" exitCode=137 Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.112584 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerDied","Data":"f6887742664d6289d8a7fa06ee72ea9ed0d81273a546a4f0279be9e0a6fe30c7"} Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.528034 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.728584 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key\") pod \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.728737 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data\") pod \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.728904 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts\") pod \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.728931 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs\") pod \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.728973 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8zjg\" (UniqueName: \"kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg\") pod \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\" (UID: \"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a\") " Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.729950 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs" (OuterVolumeSpecName: "logs") pod "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" (UID: "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.734967 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" (UID: "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.735081 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg" (OuterVolumeSpecName: "kube-api-access-v8zjg") pod "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" (UID: "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a"). InnerVolumeSpecName "kube-api-access-v8zjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.755726 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data" (OuterVolumeSpecName: "config-data") pod "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" (UID: "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.783694 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts" (OuterVolumeSpecName: "scripts") pod "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" (UID: "f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.831060 5058 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.833061 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.834724 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.835163 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:11 crc kubenswrapper[5058]: I1014 09:09:11.835496 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8zjg\" (UniqueName: \"kubernetes.io/projected/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a-kube-api-access-v8zjg\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.134405 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b64d69759-5xv4k" event={"ID":"f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a","Type":"ContainerDied","Data":"432e54ae7898c80a72b650f16a80c7aeaa1b5704775bf581f5fe51ede278f40d"} Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.135829 5058 scope.go:117] "RemoveContainer" containerID="68be36fa297877b51b05babd72ad1851699e5ded3f50bee9688f36bbac8dd838" Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.134525 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b64d69759-5xv4k" Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.212037 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"] Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.216726 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b64d69759-5xv4k"] Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.426558 5058 scope.go:117] "RemoveContainer" containerID="f6887742664d6289d8a7fa06ee72ea9ed0d81273a546a4f0279be9e0a6fe30c7" Oct 14 09:09:12 crc kubenswrapper[5058]: I1014 09:09:12.813764 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" path="/var/lib/kubelet/pods/f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a/volumes" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.933604 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-586f669495-t2sn2"] Oct 14 09:09:23 crc kubenswrapper[5058]: E1014 09:09:23.934443 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934457 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: E1014 09:09:23.934470 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934475 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: E1014 09:09:23.934508 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934515 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: E1014 09:09:23.934524 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934530 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934707 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934719 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934736 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc818af-cd39-4bb4-97a4-9c19766e7136" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.934748 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6de1e82-0a43-4e0d-b4cc-ea2912f24e2a" containerName="horizon-log" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.935772 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:23 crc kubenswrapper[5058]: I1014 09:09:23.959416 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-586f669495-t2sn2"] Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.008456 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frg2z\" (UniqueName: \"kubernetes.io/projected/cdabbf6e-8a7a-4907-866c-e230606ad5a7-kube-api-access-frg2z\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.008532 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdabbf6e-8a7a-4907-866c-e230606ad5a7-logs\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.008591 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-config-data\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.008737 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdabbf6e-8a7a-4907-866c-e230606ad5a7-horizon-secret-key\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.008754 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-scripts\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.110545 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frg2z\" (UniqueName: \"kubernetes.io/projected/cdabbf6e-8a7a-4907-866c-e230606ad5a7-kube-api-access-frg2z\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.110925 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdabbf6e-8a7a-4907-866c-e230606ad5a7-logs\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.111272 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdabbf6e-8a7a-4907-866c-e230606ad5a7-logs\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.112360 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-config-data\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.111363 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-config-data\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.112540 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdabbf6e-8a7a-4907-866c-e230606ad5a7-horizon-secret-key\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.112565 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-scripts\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.113497 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdabbf6e-8a7a-4907-866c-e230606ad5a7-scripts\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.132402 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdabbf6e-8a7a-4907-866c-e230606ad5a7-horizon-secret-key\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.132579 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frg2z\" (UniqueName: \"kubernetes.io/projected/cdabbf6e-8a7a-4907-866c-e230606ad5a7-kube-api-access-frg2z\") pod \"horizon-586f669495-t2sn2\" (UID: \"cdabbf6e-8a7a-4907-866c-e230606ad5a7\") " pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.285843 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:24 crc kubenswrapper[5058]: I1014 09:09:24.801734 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-586f669495-t2sn2"] Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.322413 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-586f669495-t2sn2" event={"ID":"cdabbf6e-8a7a-4907-866c-e230606ad5a7","Type":"ContainerStarted","Data":"1e349a8c9775e6b7cad84cdc5efe7d84c44ab2fcb5ca210f1894ce35c36e58ec"} Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.322460 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-586f669495-t2sn2" event={"ID":"cdabbf6e-8a7a-4907-866c-e230606ad5a7","Type":"ContainerStarted","Data":"2e6a5264c3dae5fc1afee099ac3625fc0607ec52789448d170f182af5cfb5089"} Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.322470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-586f669495-t2sn2" event={"ID":"cdabbf6e-8a7a-4907-866c-e230606ad5a7","Type":"ContainerStarted","Data":"53ba2a8007e12049f20e6a397ad85a044a0e871a89ec365137b70a89c595c67d"} Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.381824 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-586f669495-t2sn2" podStartSLOduration=2.381786111 podStartE2EDuration="2.381786111s" podCreationTimestamp="2025-10-14 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:09:25.35551588 +0000 UTC m=+8513.266599696" watchObservedRunningTime="2025-10-14 09:09:25.381786111 +0000 UTC m=+8513.292869917" Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.383583 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-wq4qg"] Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.384888 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.400732 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wq4qg"] Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.548609 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdngx\" (UniqueName: \"kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx\") pod \"heat-db-create-wq4qg\" (UID: \"a6cee57d-31cb-4d16-8424-f0c19da4ea13\") " pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.651983 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdngx\" (UniqueName: \"kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx\") pod \"heat-db-create-wq4qg\" (UID: \"a6cee57d-31cb-4d16-8424-f0c19da4ea13\") " pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.689938 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdngx\" (UniqueName: \"kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx\") pod \"heat-db-create-wq4qg\" (UID: \"a6cee57d-31cb-4d16-8424-f0c19da4ea13\") " pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:25 crc kubenswrapper[5058]: I1014 09:09:25.755909 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:26 crc kubenswrapper[5058]: I1014 09:09:26.211952 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wq4qg"] Oct 14 09:09:26 crc kubenswrapper[5058]: I1014 09:09:26.340014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wq4qg" event={"ID":"a6cee57d-31cb-4d16-8424-f0c19da4ea13","Type":"ContainerStarted","Data":"c890eb8787ef344f67868c5c4bf086919c274ef30452e8deb7c7b28f67d560fe"} Oct 14 09:09:27 crc kubenswrapper[5058]: I1014 09:09:27.350304 5058 generic.go:334] "Generic (PLEG): container finished" podID="a6cee57d-31cb-4d16-8424-f0c19da4ea13" containerID="e5fad26eb986a98a5417b9501dba4ea40f52076a394a23240512fb8d0ba56ece" exitCode=0 Oct 14 09:09:27 crc kubenswrapper[5058]: I1014 09:09:27.350466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wq4qg" event={"ID":"a6cee57d-31cb-4d16-8424-f0c19da4ea13","Type":"ContainerDied","Data":"e5fad26eb986a98a5417b9501dba4ea40f52076a394a23240512fb8d0ba56ece"} Oct 14 09:09:28 crc kubenswrapper[5058]: I1014 09:09:28.801961 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:28 crc kubenswrapper[5058]: I1014 09:09:28.942871 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdngx\" (UniqueName: \"kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx\") pod \"a6cee57d-31cb-4d16-8424-f0c19da4ea13\" (UID: \"a6cee57d-31cb-4d16-8424-f0c19da4ea13\") " Oct 14 09:09:28 crc kubenswrapper[5058]: I1014 09:09:28.949395 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx" (OuterVolumeSpecName: "kube-api-access-vdngx") pod "a6cee57d-31cb-4d16-8424-f0c19da4ea13" (UID: "a6cee57d-31cb-4d16-8424-f0c19da4ea13"). InnerVolumeSpecName "kube-api-access-vdngx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:09:29 crc kubenswrapper[5058]: I1014 09:09:29.045447 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdngx\" (UniqueName: \"kubernetes.io/projected/a6cee57d-31cb-4d16-8424-f0c19da4ea13-kube-api-access-vdngx\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:29 crc kubenswrapper[5058]: I1014 09:09:29.380282 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wq4qg" event={"ID":"a6cee57d-31cb-4d16-8424-f0c19da4ea13","Type":"ContainerDied","Data":"c890eb8787ef344f67868c5c4bf086919c274ef30452e8deb7c7b28f67d560fe"} Oct 14 09:09:29 crc kubenswrapper[5058]: I1014 09:09:29.380562 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c890eb8787ef344f67868c5c4bf086919c274ef30452e8deb7c7b28f67d560fe" Oct 14 09:09:29 crc kubenswrapper[5058]: I1014 09:09:29.380369 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-wq4qg" Oct 14 09:09:33 crc kubenswrapper[5058]: I1014 09:09:33.656788 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:09:33 crc kubenswrapper[5058]: I1014 09:09:33.657180 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:09:33 crc kubenswrapper[5058]: I1014 09:09:33.657248 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:09:33 crc kubenswrapper[5058]: I1014 09:09:33.658040 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:09:33 crc kubenswrapper[5058]: I1014 09:09:33.658133 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" gracePeriod=600 Oct 14 09:09:33 crc kubenswrapper[5058]: E1014 09:09:33.790148 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.286618 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.287522 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.443694 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" exitCode=0 Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.444416 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"} Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.444501 5058 scope.go:117] "RemoveContainer" containerID="981e068c2e90f44df2f90325ad6a7c204ea7b27ef02d846b87dabb2374178337" Oct 14 09:09:34 crc kubenswrapper[5058]: I1014 09:09:34.445593 5058 scope.go:117] 
"RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:09:34 crc kubenswrapper[5058]: E1014 09:09:34.446058 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.527919 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-98ec-account-create-qs55p"] Oct 14 09:09:35 crc kubenswrapper[5058]: E1014 09:09:35.528865 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cee57d-31cb-4d16-8424-f0c19da4ea13" containerName="mariadb-database-create" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.528890 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cee57d-31cb-4d16-8424-f0c19da4ea13" containerName="mariadb-database-create" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.529239 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cee57d-31cb-4d16-8424-f0c19da4ea13" containerName="mariadb-database-create" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.530268 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.533203 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.569599 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-98ec-account-create-qs55p"] Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.724974 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg\") pod \"heat-98ec-account-create-qs55p\" (UID: \"828a2275-8a5a-4686-893f-acd3c9dbc003\") " pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.826772 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg\") pod \"heat-98ec-account-create-qs55p\" (UID: \"828a2275-8a5a-4686-893f-acd3c9dbc003\") " pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.850417 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg\") pod \"heat-98ec-account-create-qs55p\" (UID: \"828a2275-8a5a-4686-893f-acd3c9dbc003\") " pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:35 crc kubenswrapper[5058]: I1014 09:09:35.866350 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:36 crc kubenswrapper[5058]: W1014 09:09:36.357381 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828a2275_8a5a_4686_893f_acd3c9dbc003.slice/crio-cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9 WatchSource:0}: Error finding container cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9: Status 404 returned error can't find the container with id cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9 Oct 14 09:09:36 crc kubenswrapper[5058]: I1014 09:09:36.358397 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-98ec-account-create-qs55p"] Oct 14 09:09:36 crc kubenswrapper[5058]: I1014 09:09:36.473139 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-98ec-account-create-qs55p" event={"ID":"828a2275-8a5a-4686-893f-acd3c9dbc003","Type":"ContainerStarted","Data":"cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9"} Oct 14 09:09:37 crc kubenswrapper[5058]: I1014 09:09:37.487343 5058 generic.go:334] "Generic (PLEG): container finished" podID="828a2275-8a5a-4686-893f-acd3c9dbc003" containerID="c8700ba8770048af3795cb56a70369552aaf61b9414926fa8cabd042bb665740" exitCode=0 Oct 14 09:09:37 crc kubenswrapper[5058]: I1014 09:09:37.487571 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-98ec-account-create-qs55p" event={"ID":"828a2275-8a5a-4686-893f-acd3c9dbc003","Type":"ContainerDied","Data":"c8700ba8770048af3795cb56a70369552aaf61b9414926fa8cabd042bb665740"} Oct 14 09:09:38 crc kubenswrapper[5058]: I1014 09:09:38.998278 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.116660 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg\") pod \"828a2275-8a5a-4686-893f-acd3c9dbc003\" (UID: \"828a2275-8a5a-4686-893f-acd3c9dbc003\") " Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.126567 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg" (OuterVolumeSpecName: "kube-api-access-55kdg") pod "828a2275-8a5a-4686-893f-acd3c9dbc003" (UID: "828a2275-8a5a-4686-893f-acd3c9dbc003"). InnerVolumeSpecName "kube-api-access-55kdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.219882 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/828a2275-8a5a-4686-893f-acd3c9dbc003-kube-api-access-55kdg\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.517344 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-98ec-account-create-qs55p" event={"ID":"828a2275-8a5a-4686-893f-acd3c9dbc003","Type":"ContainerDied","Data":"cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9"} Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.517789 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf7b738237e1eb55e55319625a9031b04b4eb351ea7473795464e995c16e6ba9" Oct 14 09:09:39 crc kubenswrapper[5058]: I1014 09:09:39.517428 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-98ec-account-create-qs55p" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.554837 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-vr6kc"] Oct 14 09:09:40 crc kubenswrapper[5058]: E1014 09:09:40.555735 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828a2275-8a5a-4686-893f-acd3c9dbc003" containerName="mariadb-account-create" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.555755 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="828a2275-8a5a-4686-893f-acd3c9dbc003" containerName="mariadb-account-create" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.556144 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="828a2275-8a5a-4686-893f-acd3c9dbc003" containerName="mariadb-account-create" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.558681 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.563358 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.564756 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-99lbk" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.570017 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-vr6kc"] Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.649211 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdps\" (UniqueName: \"kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.649371 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.649506 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.751122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.751216 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdps\" (UniqueName: \"kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.751412 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.758515 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.768703 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" 
Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.781701 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdps\" (UniqueName: \"kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps\") pod \"heat-db-sync-vr6kc\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:40 crc kubenswrapper[5058]: I1014 09:09:40.884202 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:41 crc kubenswrapper[5058]: I1014 09:09:41.385083 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-vr6kc"] Oct 14 09:09:41 crc kubenswrapper[5058]: W1014 09:09:41.401571 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216627be_67d1_4945_930b_6d1062f84697.slice/crio-35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c WatchSource:0}: Error finding container 35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c: Status 404 returned error can't find the container with id 35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c Oct 14 09:09:41 crc kubenswrapper[5058]: I1014 09:09:41.547585 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vr6kc" event={"ID":"216627be-67d1-4945-930b-6d1062f84697","Type":"ContainerStarted","Data":"35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c"} Oct 14 09:09:43 crc kubenswrapper[5058]: I1014 09:09:43.501550 5058 scope.go:117] "RemoveContainer" containerID="4272392ea59d88f88351e0aceec5fefde5c4189a9b2cab6183b5f780dd421623" Oct 14 09:09:43 crc kubenswrapper[5058]: I1014 09:09:43.533536 5058 scope.go:117] "RemoveContainer" containerID="0fd60207141a65747939fb50726ea8db3d6173b9442934a2a1eb7bef73a2d0fb" Oct 14 09:09:46 crc kubenswrapper[5058]: I1014 09:09:46.067585 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:47 crc kubenswrapper[5058]: I1014 09:09:47.697269 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-586f669495-t2sn2" Oct 14 09:09:47 crc kubenswrapper[5058]: I1014 09:09:47.761396 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:09:47 crc kubenswrapper[5058]: I1014 09:09:47.761849 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon-log" containerID="cri-o://60cdd7390d59d32e4f2e315c05d4b96eefd406ab2c7f3ba099fe2911df0cba45" gracePeriod=30 Oct 14 09:09:47 crc kubenswrapper[5058]: I1014 09:09:47.761968 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" containerID="cri-o://86d3514c3c6b4efc97d5d54b3ffe0c368ad4dccc85903d248ad69cd7ffcdb79f" gracePeriod=30 Oct 14 09:09:48 crc kubenswrapper[5058]: I1014 09:09:48.791047 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:09:48 crc kubenswrapper[5058]: E1014 09:09:48.792150 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:09:49 crc kubenswrapper[5058]: I1014 09:09:49.643979 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vr6kc" event={"ID":"216627be-67d1-4945-930b-6d1062f84697","Type":"ContainerStarted","Data":"d776d486e822ca3deeeaea04d6775e6228f31daac408513050ca6c56dc31ede5"} Oct 14 09:09:49 crc kubenswrapper[5058]: I1014 09:09:49.665921 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-vr6kc" podStartSLOduration=1.806548919 podStartE2EDuration="9.66590662s" podCreationTimestamp="2025-10-14 09:09:40 +0000 UTC" firstStartedPulling="2025-10-14 09:09:41.40472519 +0000 UTC m=+8529.315808996" lastFinishedPulling="2025-10-14 09:09:49.264082891 +0000 UTC m=+8537.175166697" observedRunningTime="2025-10-14 09:09:49.663033708 +0000 UTC m=+8537.574117574" watchObservedRunningTime="2025-10-14 09:09:49.66590662 +0000 UTC m=+8537.576990426" Oct 14 09:09:51 crc kubenswrapper[5058]: I1014 09:09:51.670412 5058 generic.go:334] "Generic (PLEG): container finished" podID="216627be-67d1-4945-930b-6d1062f84697" containerID="d776d486e822ca3deeeaea04d6775e6228f31daac408513050ca6c56dc31ede5" exitCode=0 Oct 14 09:09:51 crc kubenswrapper[5058]: I1014 09:09:51.670516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vr6kc" event={"ID":"216627be-67d1-4945-930b-6d1062f84697","Type":"ContainerDied","Data":"d776d486e822ca3deeeaea04d6775e6228f31daac408513050ca6c56dc31ede5"} Oct 14 09:09:51 crc kubenswrapper[5058]: I1014 09:09:51.674251 5058 generic.go:334] "Generic (PLEG): container finished" podID="63447aef-1de1-4444-8065-7d46756df0ae" containerID="86d3514c3c6b4efc97d5d54b3ffe0c368ad4dccc85903d248ad69cd7ffcdb79f" exitCode=0 Oct 14 09:09:51 crc kubenswrapper[5058]: I1014 09:09:51.674288 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerDied","Data":"86d3514c3c6b4efc97d5d54b3ffe0c368ad4dccc85903d248ad69cd7ffcdb79f"} Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.159594 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.234734 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data\") pod \"216627be-67d1-4945-930b-6d1062f84697\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.235183 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle\") pod \"216627be-67d1-4945-930b-6d1062f84697\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.235293 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hdps\" (UniqueName: \"kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps\") pod \"216627be-67d1-4945-930b-6d1062f84697\" (UID: \"216627be-67d1-4945-930b-6d1062f84697\") " Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.241727 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps" (OuterVolumeSpecName: "kube-api-access-2hdps") pod "216627be-67d1-4945-930b-6d1062f84697" (UID: "216627be-67d1-4945-930b-6d1062f84697"). InnerVolumeSpecName "kube-api-access-2hdps". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.275049 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216627be-67d1-4945-930b-6d1062f84697" (UID: "216627be-67d1-4945-930b-6d1062f84697"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.338302 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hdps\" (UniqueName: \"kubernetes.io/projected/216627be-67d1-4945-930b-6d1062f84697-kube-api-access-2hdps\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.338349 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.352923 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data" (OuterVolumeSpecName: "config-data") pod "216627be-67d1-4945-930b-6d1062f84697" (UID: "216627be-67d1-4945-930b-6d1062f84697"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.447010 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216627be-67d1-4945-930b-6d1062f84697-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.718660 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-vr6kc" event={"ID":"216627be-67d1-4945-930b-6d1062f84697","Type":"ContainerDied","Data":"35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c"} Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.718715 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35eae34e7f15ebaa013dfad85bd9af38a77c22973f8cd363e4af99546cffcb8c" Oct 14 09:09:53 crc kubenswrapper[5058]: I1014 09:09:53.718821 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-vr6kc" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.164603 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6b4df585c6-lftwf"] Oct 14 09:09:55 crc kubenswrapper[5058]: E1014 09:09:55.165432 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216627be-67d1-4945-930b-6d1062f84697" containerName="heat-db-sync" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.165451 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="216627be-67d1-4945-930b-6d1062f84697" containerName="heat-db-sync" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.165764 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="216627be-67d1-4945-930b-6d1062f84697" containerName="heat-db-sync" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.166675 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.169056 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-99lbk" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.173270 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.173649 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.183012 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6b4df585c6-lftwf"] Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.185118 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tncs\" (UniqueName: \"kubernetes.io/projected/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-kube-api-access-8tncs\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.185368 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.185653 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data-custom\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.187745 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-combined-ca-bundle\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.291815 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.291922 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data-custom\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.292011 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-combined-ca-bundle\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 
09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.292037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tncs\" (UniqueName: \"kubernetes.io/projected/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-kube-api-access-8tncs\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.302201 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.309238 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-combined-ca-bundle\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.349110 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5584c58558-xmfqb"] Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.350348 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.352541 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.355708 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-config-data-custom\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.373556 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5584c58558-xmfqb"] Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.379789 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tncs\" (UniqueName: \"kubernetes.io/projected/c1f909eb-bccb-4af2-ac36-13b1f00e3a33-kube-api-access-8tncs\") pod \"heat-engine-6b4df585c6-lftwf\" (UID: \"c1f909eb-bccb-4af2-ac36-13b1f00e3a33\") " pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.393260 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.393335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-combined-ca-bundle\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.393414 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6mcpz\" (UniqueName: \"kubernetes.io/projected/e6661264-a713-43c4-89da-c3c07c28799b-kube-api-access-6mcpz\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.393430 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data-custom\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.393631 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d84d4c977-rr8kv"] Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.395686 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.397758 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.408254 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d84d4c977-rr8kv"] Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.494764 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-combined-ca-bundle\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.494850 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv57c\" (UniqueName: \"kubernetes.io/projected/b249517c-6c1a-4820-8f44-428350239048-kube-api-access-lv57c\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.494880 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.494932 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-combined-ca-bundle\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.494984 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data-custom\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.495034 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcpz\" (UniqueName: 
\"kubernetes.io/projected/e6661264-a713-43c4-89da-c3c07c28799b-kube-api-access-6mcpz\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.495054 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data-custom\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.495077 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.498460 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.500168 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.501476 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-config-data-custom\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.512824 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcpz\" (UniqueName: \"kubernetes.io/projected/e6661264-a713-43c4-89da-c3c07c28799b-kube-api-access-6mcpz\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.512839 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6661264-a713-43c4-89da-c3c07c28799b-combined-ca-bundle\") pod \"heat-api-5584c58558-xmfqb\" (UID: \"e6661264-a713-43c4-89da-c3c07c28799b\") " pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.597123 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data-custom\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.597195 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.597224 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-combined-ca-bundle\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.597270 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv57c\" (UniqueName: \"kubernetes.io/projected/b249517c-6c1a-4820-8f44-428350239048-kube-api-access-lv57c\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.601269 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data-custom\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.602413 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-config-data\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.606466 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b249517c-6c1a-4820-8f44-428350239048-combined-ca-bundle\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.613925 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv57c\" (UniqueName: \"kubernetes.io/projected/b249517c-6c1a-4820-8f44-428350239048-kube-api-access-lv57c\") pod \"heat-cfnapi-d84d4c977-rr8kv\" (UID: \"b249517c-6c1a-4820-8f44-428350239048\") " pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.742391 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:55 crc kubenswrapper[5058]: I1014 09:09:55.747893 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.022810 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6b4df585c6-lftwf"] Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.299932 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d84d4c977-rr8kv"] Oct 14 09:09:56 crc kubenswrapper[5058]: W1014 09:09:56.307523 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb249517c_6c1a_4820_8f44_428350239048.slice/crio-a5dd3c5676fbea77204219b451edfb08d309a7e2158f77afece44a3304b38588 WatchSource:0}: Error finding container a5dd3c5676fbea77204219b451edfb08d309a7e2158f77afece44a3304b38588: Status 404 returned error can't find the container with id a5dd3c5676fbea77204219b451edfb08d309a7e2158f77afece44a3304b38588 Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.430563 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5584c58558-xmfqb"] Oct 14 09:09:56 crc kubenswrapper[5058]: W1014 09:09:56.439494 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6661264_a713_43c4_89da_c3c07c28799b.slice/crio-1fa28238b899dc1c575216ed9a4f789b0d14c0ce980803b7cb62a468cb387194 WatchSource:0}: Error finding container 1fa28238b899dc1c575216ed9a4f789b0d14c0ce980803b7cb62a468cb387194: Status 404 returned error can't find the container with id 1fa28238b899dc1c575216ed9a4f789b0d14c0ce980803b7cb62a468cb387194 Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.766562 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5584c58558-xmfqb" event={"ID":"e6661264-a713-43c4-89da-c3c07c28799b","Type":"ContainerStarted","Data":"1fa28238b899dc1c575216ed9a4f789b0d14c0ce980803b7cb62a468cb387194"} Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.769609 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b4df585c6-lftwf" event={"ID":"c1f909eb-bccb-4af2-ac36-13b1f00e3a33","Type":"ContainerStarted","Data":"6b5d57e4ca6b9c4fc7ca0e5ceaed0a6d4d1579fdb53010389cb6b373cde8524e"} Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.769637 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6b4df585c6-lftwf" event={"ID":"c1f909eb-bccb-4af2-ac36-13b1f00e3a33","Type":"ContainerStarted","Data":"acbc3d9575bc013e52f7cb1ec9f8442fe93abef5d03229a28fdbc55f0f543fc3"} Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.769733 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6b4df585c6-lftwf" Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.771634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" event={"ID":"b249517c-6c1a-4820-8f44-428350239048","Type":"ContainerStarted","Data":"a5dd3c5676fbea77204219b451edfb08d309a7e2158f77afece44a3304b38588"} Oct 14 09:09:56 crc kubenswrapper[5058]: I1014 09:09:56.794078 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6b4df585c6-lftwf" podStartSLOduration=1.794056484 podStartE2EDuration="1.794056484s" podCreationTimestamp="2025-10-14 09:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:09:56.786293852 +0000 UTC 
m=+8544.697377678" watchObservedRunningTime="2025-10-14 09:09:56.794056484 +0000 UTC m=+8544.705140300" Oct 14 09:09:57 crc kubenswrapper[5058]: I1014 09:09:57.342521 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.141:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.141:8080: connect: connection refused" Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.049913 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-92rfb"] Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.061618 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-92rfb"] Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.801105 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a4c6ec-719f-48a2-b490-3883988e0a82" path="/var/lib/kubelet/pods/20a4c6ec-719f-48a2-b490-3883988e0a82/volumes" Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.801841 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5584c58558-xmfqb" event={"ID":"e6661264-a713-43c4-89da-c3c07c28799b","Type":"ContainerStarted","Data":"dc38db50b0cce9b733bb52099d6311dfa21fe017edc3c747c542bd21e6daee6b"} Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.801887 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5584c58558-xmfqb" Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.801905 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.801917 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" event={"ID":"b249517c-6c1a-4820-8f44-428350239048","Type":"ContainerStarted","Data":"cc02ba2e4e49cb8a01cbdf50291c979861be0a72ad668653ad742e5988572ecf"} Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.810894 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5584c58558-xmfqb" podStartSLOduration=2.456078743 podStartE2EDuration="3.810872309s" podCreationTimestamp="2025-10-14 09:09:55 +0000 UTC" firstStartedPulling="2025-10-14 09:09:56.442251936 +0000 UTC m=+8544.353335742" lastFinishedPulling="2025-10-14 09:09:57.797045502 +0000 UTC m=+8545.708129308" observedRunningTime="2025-10-14 09:09:58.807712418 +0000 UTC m=+8546.718796244" watchObservedRunningTime="2025-10-14 09:09:58.810872309 +0000 UTC m=+8546.721956115" Oct 14 09:09:58 crc kubenswrapper[5058]: I1014 09:09:58.823085 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d84d4c977-rr8kv" podStartSLOduration=2.336683589 podStartE2EDuration="3.823067487s" podCreationTimestamp="2025-10-14 09:09:55 +0000 UTC" firstStartedPulling="2025-10-14 09:09:56.309465769 +0000 UTC m=+8544.220549575" lastFinishedPulling="2025-10-14 09:09:57.795849667 +0000 UTC m=+8545.706933473" observedRunningTime="2025-10-14 09:09:58.82139642 +0000 UTC m=+8546.732480226" watchObservedRunningTime="2025-10-14 09:09:58.823067487 +0000 UTC m=+8546.734151293" Oct 14 09:10:03 crc kubenswrapper[5058]: I1014 09:10:03.791326 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:10:03 crc kubenswrapper[5058]: E1014 09:10:03.796391 5058 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:10:07 crc kubenswrapper[5058]: I1014 09:10:07.189646 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5584c58558-xmfqb"
Oct 14 09:10:07 crc kubenswrapper[5058]: I1014 09:10:07.342368 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.141:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.141:8080: connect: connection refused"
Oct 14 09:10:07 crc kubenswrapper[5058]: I1014 09:10:07.386609 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d84d4c977-rr8kv"
Oct 14 09:10:08 crc kubenswrapper[5058]: I1014 09:10:08.039066 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8bb9-account-create-f48gz"]
Oct 14 09:10:08 crc kubenswrapper[5058]: I1014 09:10:08.052081 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8bb9-account-create-f48gz"]
Oct 14 09:10:08 crc kubenswrapper[5058]: I1014 09:10:08.810855 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec9b5eb-4e30-4901-bf63-4a4cbd0307cc" path="/var/lib/kubelet/pods/aec9b5eb-4e30-4901-bf63-4a4cbd0307cc/volumes"
Oct 14 09:10:15 crc kubenswrapper[5058]: I1014 09:10:15.539665 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6b4df585c6-lftwf"
Oct 14 09:10:16 crc kubenswrapper[5058]: I1014 09:10:16.790695 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:10:16 crc kubenswrapper[5058]: E1014 09:10:16.791285 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:10:17 crc kubenswrapper[5058]: I1014 09:10:17.342452 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84497fcc7f-cv9tp" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.141:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.141:8080: connect: connection refused"
Oct 14 09:10:17 crc kubenswrapper[5058]: I1014 09:10:17.342564 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.035831 5058 generic.go:334] "Generic (PLEG): container finished" podID="63447aef-1de1-4444-8065-7d46756df0ae" containerID="60cdd7390d59d32e4f2e315c05d4b96eefd406ab2c7f3ba099fe2911df0cba45" exitCode=137
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.035904 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerDied","Data":"60cdd7390d59d32e4f2e315c05d4b96eefd406ab2c7f3ba099fe2911df0cba45"}
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.270263 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84497fcc7f-cv9tp"
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.401159 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgmm8\" (UniqueName: \"kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8\") pod \"63447aef-1de1-4444-8065-7d46756df0ae\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") "
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.401300 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data\") pod \"63447aef-1de1-4444-8065-7d46756df0ae\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") "
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.401397 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs\") pod \"63447aef-1de1-4444-8065-7d46756df0ae\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") "
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.401441 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts\") pod \"63447aef-1de1-4444-8065-7d46756df0ae\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") "
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.401606 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key\") pod \"63447aef-1de1-4444-8065-7d46756df0ae\" (UID: \"63447aef-1de1-4444-8065-7d46756df0ae\") "
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.402645 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs" (OuterVolumeSpecName: "logs") pod "63447aef-1de1-4444-8065-7d46756df0ae" (UID: "63447aef-1de1-4444-8065-7d46756df0ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.408297 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8" (OuterVolumeSpecName: "kube-api-access-fgmm8") pod "63447aef-1de1-4444-8065-7d46756df0ae" (UID: "63447aef-1de1-4444-8065-7d46756df0ae"). InnerVolumeSpecName "kube-api-access-fgmm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.410002 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "63447aef-1de1-4444-8065-7d46756df0ae" (UID: "63447aef-1de1-4444-8065-7d46756df0ae"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.436788 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data" (OuterVolumeSpecName: "config-data") pod "63447aef-1de1-4444-8065-7d46756df0ae" (UID: "63447aef-1de1-4444-8065-7d46756df0ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.454897 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts" (OuterVolumeSpecName: "scripts") pod "63447aef-1de1-4444-8065-7d46756df0ae" (UID: "63447aef-1de1-4444-8065-7d46756df0ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.504210 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgmm8\" (UniqueName: \"kubernetes.io/projected/63447aef-1de1-4444-8065-7d46756df0ae-kube-api-access-fgmm8\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.504262 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.504281 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63447aef-1de1-4444-8065-7d46756df0ae-logs\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.504300 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63447aef-1de1-4444-8065-7d46756df0ae-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:18 crc kubenswrapper[5058]: I1014 09:10:18.504316 5058 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63447aef-1de1-4444-8065-7d46756df0ae-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.055617 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84497fcc7f-cv9tp" event={"ID":"63447aef-1de1-4444-8065-7d46756df0ae","Type":"ContainerDied","Data":"5bbdf303d52a6e8b5cb8ebbaaa39422e756dd2dc6ceb675606b6525566ba5f55"}
Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.055696 5058 scope.go:117] "RemoveContainer" containerID="86d3514c3c6b4efc97d5d54b3ffe0c368ad4dccc85903d248ad69cd7ffcdb79f"
Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.056676 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84497fcc7f-cv9tp"
Need to start a new one" pod="openstack/horizon-84497fcc7f-cv9tp" Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.112784 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.130707 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84497fcc7f-cv9tp"] Oct 14 09:10:19 crc kubenswrapper[5058]: I1014 09:10:19.344944 5058 scope.go:117] "RemoveContainer" containerID="60cdd7390d59d32e4f2e315c05d4b96eefd406ab2c7f3ba099fe2911df0cba45" Oct 14 09:10:20 crc kubenswrapper[5058]: I1014 09:10:20.040041 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-q5qzw"] Oct 14 09:10:20 crc kubenswrapper[5058]: I1014 09:10:20.053521 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-q5qzw"] Oct 14 09:10:20 crc kubenswrapper[5058]: I1014 09:10:20.814791 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63447aef-1de1-4444-8065-7d46756df0ae" path="/var/lib/kubelet/pods/63447aef-1de1-4444-8065-7d46756df0ae/volumes" Oct 14 09:10:20 crc kubenswrapper[5058]: I1014 09:10:20.816266 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fbca42-6fb5-4dab-8f00-8d466b10974c" path="/var/lib/kubelet/pods/79fbca42-6fb5-4dab-8f00-8d466b10974c/volumes" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.876635 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx"] Oct 14 09:10:24 crc kubenswrapper[5058]: E1014 09:10:24.877770 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.877829 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" Oct 14 09:10:24 crc kubenswrapper[5058]: E1014 09:10:24.877861 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon-log" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.877873 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon-log" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.878249 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon-log" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.878283 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="63447aef-1de1-4444-8065-7d46756df0ae" containerName="horizon" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.880925 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.884276 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 09:10:24 crc kubenswrapper[5058]: I1014 09:10:24.896707 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx"] Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.067730 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mjqb\" (UniqueName: \"kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.068233 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.068321 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.170604 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mjqb\" (UniqueName: \"kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.170711 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.170907 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.171366 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.171496 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.192117 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mjqb\" (UniqueName: \"kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.205685 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" Oct 14 09:10:25 crc kubenswrapper[5058]: I1014 09:10:25.749284 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx"] Oct 14 09:10:25 crc kubenswrapper[5058]: W1014 09:10:25.750022 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d91c6c2_14e3_4d18_b53b_55dd5000d1ad.slice/crio-d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f WatchSource:0}: Error finding container d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f: Status 404 returned error can't find the container with id d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f Oct 14 09:10:26 crc kubenswrapper[5058]: I1014 09:10:26.145191 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerStarted","Data":"1ba3a858b4c9eeeb1e521d2cb090ba40178054ec978af8ea21365229e7efbc20"} Oct 14 09:10:26 crc kubenswrapper[5058]: I1014 09:10:26.145653 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerStarted","Data":"d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f"} Oct 14 09:10:27 crc kubenswrapper[5058]: I1014 09:10:27.159961 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerID="1ba3a858b4c9eeeb1e521d2cb090ba40178054ec978af8ea21365229e7efbc20" exitCode=0 Oct 14 09:10:27 crc kubenswrapper[5058]: I1014 09:10:27.160004 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerDied","Data":"1ba3a858b4c9eeeb1e521d2cb090ba40178054ec978af8ea21365229e7efbc20"} Oct 14 09:10:29 crc 
Oct 14 09:10:29 crc kubenswrapper[5058]: E1014 09:10:29.792016 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:10:30 crc kubenswrapper[5058]: I1014 09:10:30.200489 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerID="4774421666ae481d72f8e1775eacfc083ca8162518a91cb3e6b74a6b003d4ba8" exitCode=0
Oct 14 09:10:30 crc kubenswrapper[5058]: I1014 09:10:30.200569 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerDied","Data":"4774421666ae481d72f8e1775eacfc083ca8162518a91cb3e6b74a6b003d4ba8"}
Oct 14 09:10:31 crc kubenswrapper[5058]: I1014 09:10:31.220072 5058 generic.go:334] "Generic (PLEG): container finished" podID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerID="79a60dfd5eaf3e23c7468e7ca928e5e3457fb636cfe526618713e4118bcde293" exitCode=0
Oct 14 09:10:31 crc kubenswrapper[5058]: I1014 09:10:31.220128 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerDied","Data":"79a60dfd5eaf3e23c7468e7ca928e5e3457fb636cfe526618713e4118bcde293"}
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.719548 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx"
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.875855 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mjqb\" (UniqueName: \"kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb\") pod \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") "
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.875929 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util\") pod \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") "
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.875989 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle\") pod \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\" (UID: \"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad\") "
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.880674 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle" (OuterVolumeSpecName: "bundle") pod "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" (UID: "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.882018 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb" (OuterVolumeSpecName: "kube-api-access-9mjqb") pod "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" (UID: "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad"). InnerVolumeSpecName "kube-api-access-9mjqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.888177 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util" (OuterVolumeSpecName: "util") pod "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" (UID: "9d91c6c2-14e3-4d18-b53b-55dd5000d1ad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.981326 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mjqb\" (UniqueName: \"kubernetes.io/projected/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-kube-api-access-9mjqb\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.982192 5058 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-util\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:32 crc kubenswrapper[5058]: I1014 09:10:32.982221 5058 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d91c6c2-14e3-4d18-b53b-55dd5000d1ad-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:33 crc kubenswrapper[5058]: I1014 09:10:33.251663 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx" event={"ID":"9d91c6c2-14e3-4d18-b53b-55dd5000d1ad","Type":"ContainerDied","Data":"d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f"}
Oct 14 09:10:33 crc kubenswrapper[5058]: I1014 09:10:33.251723 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d021a59fd9f9fbee61a4f5517d22df6decedc19c3630ecf5118ecce36dd3691f"
Oct 14 09:10:33 crc kubenswrapper[5058]: I1014 09:10:33.251761 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.754222 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-2php4"]
Oct 14 09:10:42 crc kubenswrapper[5058]: E1014 09:10:42.756570 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="util"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.756597 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="util"
Oct 14 09:10:42 crc kubenswrapper[5058]: E1014 09:10:42.756645 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="pull"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.756653 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="pull"
Oct 14 09:10:42 crc kubenswrapper[5058]: E1014 09:10:42.756672 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="extract"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.756682 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="extract"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.756976 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d91c6c2-14e3-4d18-b53b-55dd5000d1ad" containerName="extract"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.757855 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.759928 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dkgm9"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.760231 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.760291 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.767713 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-2php4"]
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.782063 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226rp\" (UniqueName: \"kubernetes.io/projected/4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3-kube-api-access-226rp\") pod \"obo-prometheus-operator-7c8cf85677-2php4\" (UID: \"4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4"
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.878672 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf"]
Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.880031 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf"
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.883340 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.883395 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226rp\" (UniqueName: \"kubernetes.io/projected/4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3-kube-api-access-226rp\") pod \"obo-prometheus-operator-7c8cf85677-2php4\" (UID: \"4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.883452 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.886723 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.887283 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zsgql" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.894400 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf"] Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.913890 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4"] Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.915333 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.932595 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226rp\" (UniqueName: \"kubernetes.io/projected/4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3-kube-api-access-226rp\") pod \"obo-prometheus-operator-7c8cf85677-2php4\" (UID: \"4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.936747 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4"] Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.989217 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.989311 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.993618 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.994453 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/641c8e0a-5556-4b93-b7ae-9a6e428509fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf\" (UID: \"641c8e0a-5556-4b93-b7ae-9a6e428509fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.996064 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-f825p"] Oct 14 09:10:42 crc kubenswrapper[5058]: I1014 09:10:42.997468 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.002887 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.004490 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hrkbx" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.051431 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-f825p"] Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.082289 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.099691 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: \"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.099784 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjr2\" (UniqueName: \"kubernetes.io/projected/fd3e9445-621b-4747-8bb6-25e5c8587b1e-kube-api-access-qxjr2\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.099868 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd3e9445-621b-4747-8bb6-25e5c8587b1e-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.099908 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: \"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.177126 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-4zhxb"] Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.182482 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.184549 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-sllrj" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.195949 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201456 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd3e9445-621b-4747-8bb6-25e5c8587b1e-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201501 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: \"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201531 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd5t\" (UniqueName: \"kubernetes.io/projected/863bdd17-8f39-44bc-84d6-2a6d80b685c7-kube-api-access-hhd5t\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201566 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/863bdd17-8f39-44bc-84d6-2a6d80b685c7-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201613 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: \"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.201669 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjr2\" (UniqueName: \"kubernetes.io/projected/fd3e9445-621b-4747-8bb6-25e5c8587b1e-kube-api-access-qxjr2\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.213393 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: \"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.215287 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96bdd361-3872-4153-a2c0-53fda217a865-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4\" (UID: 
\"96bdd361-3872-4153-a2c0-53fda217a865\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.216325 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd3e9445-621b-4747-8bb6-25e5c8587b1e-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.239030 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjr2\" (UniqueName: \"kubernetes.io/projected/fd3e9445-621b-4747-8bb6-25e5c8587b1e-kube-api-access-qxjr2\") pod \"observability-operator-cc5f78dfc-f825p\" (UID: \"fd3e9445-621b-4747-8bb6-25e5c8587b1e\") " pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.268702 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-4zhxb"] Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.292638 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.302941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhd5t\" (UniqueName: \"kubernetes.io/projected/863bdd17-8f39-44bc-84d6-2a6d80b685c7-kube-api-access-hhd5t\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.303018 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/863bdd17-8f39-44bc-84d6-2a6d80b685c7-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.303995 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/863bdd17-8f39-44bc-84d6-2a6d80b685c7-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.323560 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhd5t\" (UniqueName: \"kubernetes.io/projected/863bdd17-8f39-44bc-84d6-2a6d80b685c7-kube-api-access-hhd5t\") pod \"perses-operator-54bc95c9fb-4zhxb\" (UID: \"863bdd17-8f39-44bc-84d6-2a6d80b685c7\") " pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.485271 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.499089 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.655187 5058 scope.go:117] "RemoveContainer" containerID="026cc49e2d48700a17830e95235358399edc9f659ed2483bd2d04eeaeceabb3a" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.742002 5058 scope.go:117] "RemoveContainer" containerID="5ead1de5653c9deaa4d6d21dc5d9db0e8320dc43b22fdd379d241528bb7b86cd" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.775088 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-2php4"] Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.790515 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:10:43 crc kubenswrapper[5058]: E1014 09:10:43.790706 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.820865 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf"] Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.832995 5058 scope.go:117] "RemoveContainer" containerID="236993a75125ebdfbb21fc3943e8c7d4c26c2f8a27333594a32b89b217274573" Oct 14 09:10:43 crc kubenswrapper[5058]: I1014 09:10:43.881140 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4"] Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.046107 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dkzb5"] Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.061353 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dkzb5"] Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.335371 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-f825p"] Oct 14 09:10:44 crc kubenswrapper[5058]: W1014 09:10:44.335746 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd3e9445_621b_4747_8bb6_25e5c8587b1e.slice/crio-ff882aa73b6272fd2996d161cc50449f26f9b0f35ec3ef7421aac34541ee6dd5 WatchSource:0}: Error finding container ff882aa73b6272fd2996d161cc50449f26f9b0f35ec3ef7421aac34541ee6dd5: Status 404 returned error can't find the container with id ff882aa73b6272fd2996d161cc50449f26f9b0f35ec3ef7421aac34541ee6dd5 Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.417751 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-4zhxb"] Oct 14 09:10:44 crc kubenswrapper[5058]: W1014 09:10:44.420657 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863bdd17_8f39_44bc_84d6_2a6d80b685c7.slice/crio-e90c651d080748afec5340c056df87d08dc31012dc8b3b0173619dd45054ab43 WatchSource:0}: Error finding container e90c651d080748afec5340c056df87d08dc31012dc8b3b0173619dd45054ab43: Status 404 returned error can't 
find the container with id e90c651d080748afec5340c056df87d08dc31012dc8b3b0173619dd45054ab43 Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.422822 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" event={"ID":"fd3e9445-621b-4747-8bb6-25e5c8587b1e","Type":"ContainerStarted","Data":"ff882aa73b6272fd2996d161cc50449f26f9b0f35ec3ef7421aac34541ee6dd5"} Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.443351 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" event={"ID":"96bdd361-3872-4153-a2c0-53fda217a865","Type":"ContainerStarted","Data":"be3cc73534e9200c3987c3248c79255e0f95404ec3c5fbd0310d01846b08e3e6"} Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.446964 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" event={"ID":"641c8e0a-5556-4b93-b7ae-9a6e428509fd","Type":"ContainerStarted","Data":"a14acfcf00ada6095c596b6c3eb229f089ef8b16c3807bfee4e5a9f5c85d4232"} Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.451217 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" event={"ID":"4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3","Type":"ContainerStarted","Data":"a99ac536187e55217bd164750e8e3493f7c667afcec880db3437dd7172da8bf3"} Oct 14 09:10:44 crc kubenswrapper[5058]: I1014 09:10:44.801305 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9fbd0f-c4b6-45db-b0d8-e0240f276834" path="/var/lib/kubelet/pods/1a9fbd0f-c4b6-45db-b0d8-e0240f276834/volumes" Oct 14 09:10:45 crc kubenswrapper[5058]: I1014 09:10:45.463950 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" event={"ID":"863bdd17-8f39-44bc-84d6-2a6d80b685c7","Type":"ContainerStarted","Data":"e90c651d080748afec5340c056df87d08dc31012dc8b3b0173619dd45054ab43"} Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.497039 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" event={"ID":"863bdd17-8f39-44bc-84d6-2a6d80b685c7","Type":"ContainerStarted","Data":"4eb92286ab43455483f70edf4ae995c884e28b7c6de8a7626d29511040488d58"} Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.497525 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.499733 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" event={"ID":"96bdd361-3872-4153-a2c0-53fda217a865","Type":"ContainerStarted","Data":"6b04110dc23f5ba48caed35be89233af5f39af50964c7d35a41fb8359ee43241"} Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.503055 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" event={"ID":"641c8e0a-5556-4b93-b7ae-9a6e428509fd","Type":"ContainerStarted","Data":"ecf0afaa87ef6e76faf20966d18c44df7eb88dd0279abffcffc7d57d8e4def85"} Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.535571 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4" podStartSLOduration=3.203564883 
podStartE2EDuration="6.53555208s" podCreationTimestamp="2025-10-14 09:10:42 +0000 UTC" firstStartedPulling="2025-10-14 09:10:43.894117024 +0000 UTC m=+8591.805200830" lastFinishedPulling="2025-10-14 09:10:47.226104221 +0000 UTC m=+8595.137188027" observedRunningTime="2025-10-14 09:10:48.534244173 +0000 UTC m=+8596.445327989" watchObservedRunningTime="2025-10-14 09:10:48.53555208 +0000 UTC m=+8596.446635886" Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.537140 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" podStartSLOduration=2.745406176 podStartE2EDuration="5.537124025s" podCreationTimestamp="2025-10-14 09:10:43 +0000 UTC" firstStartedPulling="2025-10-14 09:10:44.442514514 +0000 UTC m=+8592.353598330" lastFinishedPulling="2025-10-14 09:10:47.234232373 +0000 UTC m=+8595.145316179" observedRunningTime="2025-10-14 09:10:48.513701206 +0000 UTC m=+8596.424785012" watchObservedRunningTime="2025-10-14 09:10:48.537124025 +0000 UTC m=+8596.448207831" Oct 14 09:10:48 crc kubenswrapper[5058]: I1014 09:10:48.556527 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf" podStartSLOduration=3.208193467 podStartE2EDuration="6.55650794s" podCreationTimestamp="2025-10-14 09:10:42 +0000 UTC" firstStartedPulling="2025-10-14 09:10:43.88556714 +0000 UTC m=+8591.796650946" lastFinishedPulling="2025-10-14 09:10:47.233881613 +0000 UTC m=+8595.144965419" observedRunningTime="2025-10-14 09:10:48.55407522 +0000 UTC m=+8596.465159036" watchObservedRunningTime="2025-10-14 09:10:48.55650794 +0000 UTC m=+8596.467591746" Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.509829 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-4zhxb" Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.570200 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" event={"ID":"4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3","Type":"ContainerStarted","Data":"61a61bd29df370de21918a9712ffd836dc98a2d9e9920d57b00afa0f6a6b3bb7"} Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.571496 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" event={"ID":"fd3e9445-621b-4747-8bb6-25e5c8587b1e","Type":"ContainerStarted","Data":"38278f72764c470d72a1646816331c3c0052fd5f556ae9fd1a9e955cbc20e507"} Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.571742 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.590272 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-2php4" podStartSLOduration=2.812636325 podStartE2EDuration="11.590255101s" podCreationTimestamp="2025-10-14 09:10:42 +0000 UTC" firstStartedPulling="2025-10-14 09:10:43.844372642 +0000 UTC m=+8591.755456438" lastFinishedPulling="2025-10-14 09:10:52.621991398 +0000 UTC m=+8600.533075214" observedRunningTime="2025-10-14 09:10:53.583242811 +0000 UTC m=+8601.494326637" watchObservedRunningTime="2025-10-14 09:10:53.590255101 +0000 UTC m=+8601.501338907" Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.591972 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" Oct 14 09:10:53 crc kubenswrapper[5058]: I1014 09:10:53.610155 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-f825p" podStartSLOduration=3.303980674 podStartE2EDuration="11.61013465s" podCreationTimestamp="2025-10-14 09:10:42 +0000 UTC" firstStartedPulling="2025-10-14 09:10:44.338226062 +0000 UTC m=+8592.249309868" lastFinishedPulling="2025-10-14 09:10:52.644380038 +0000 UTC m=+8600.555463844" observedRunningTime="2025-10-14 09:10:53.604724705 +0000 UTC m=+8601.515808511" watchObservedRunningTime="2025-10-14 09:10:53.61013465 +0000 UTC m=+8601.521218466" Oct 14 09:10:54 crc kubenswrapper[5058]: I1014 09:10:54.066925 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0feb-account-create-p6d7p"] Oct 14 09:10:54 crc kubenswrapper[5058]: I1014 09:10:54.077147 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0feb-account-create-p6d7p"] Oct 14 09:10:54 crc kubenswrapper[5058]: I1014 09:10:54.805080 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3682e492-8b90-4e55-b095-b6af5cb7c3e5" path="/var/lib/kubelet/pods/3682e492-8b90-4e55-b095-b6af5cb7c3e5/volumes" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.654667 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.655298 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" containerName="openstackclient" containerID="cri-o://a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75" gracePeriod=2 Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.666340 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.741778 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: E1014 09:10:56.742311 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" containerName="openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.742336 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" containerName="openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.742599 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" containerName="openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.743499 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.810043 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.811017 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.811077 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.811126 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.864032 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: E1014 09:10:56.864768 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-lrfp5 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="2c0ae23e-7623-4381-8820-05c471ae126d" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.891240 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.900879 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.902317 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.923892 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.923997 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.924068 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.927726 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.964271 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 09:10:56 crc kubenswrapper[5058]: I1014 09:10:56.965114 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:56 crc kubenswrapper[5058]: E1014 09:10:56.965768 5058 projected.go:194] Error preparing data for projected volume kube-api-access-lrfp5 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (2c0ae23e-7623-4381-8820-05c471ae126d) does not match the UID in record. The object might have been deleted and then recreated Oct 14 09:10:56 crc kubenswrapper[5058]: E1014 09:10:56.970866 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5 podName:2c0ae23e-7623-4381-8820-05c471ae126d nodeName:}" failed. No retries permitted until 2025-10-14 09:10:57.470846278 +0000 UTC m=+8605.381930084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lrfp5" (UniqueName: "kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5") pod "openstackclient" (UID: "2c0ae23e-7623-4381-8820-05c471ae126d") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (2c0ae23e-7623-4381-8820-05c471ae126d) does not match the UID in record. 
The object might have been deleted and then recreated Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.027156 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7jl\" (UniqueName: \"kubernetes.io/projected/11b87d3e-44ac-4c85-a244-699d47ba80fa-kube-api-access-dg7jl\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.027220 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.037063 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.158885 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.158973 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7jl\" (UniqueName: \"kubernetes.io/projected/11b87d3e-44ac-4c85-a244-699d47ba80fa-kube-api-access-dg7jl\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.159008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.160881 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.172399 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11b87d3e-44ac-4c85-a244-699d47ba80fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.219974 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7jl\" (UniqueName: \"kubernetes.io/projected/11b87d3e-44ac-4c85-a244-699d47ba80fa-kube-api-access-dg7jl\") pod \"openstackclient\" (UID: \"11b87d3e-44ac-4c85-a244-699d47ba80fa\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.223038 5058 kubelet.go:2421] "SyncLoop ADD" 
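
The projected.go:194 failures above follow from the delete-and-recreate earlier in this span: the kube-api-access-* volume is backed by a service-account TokenRequest bound to the pod's UID, and the kubelet was still requesting a token bound to the old openstackclient UID (2c0ae23e-...) after the replacement pod (11b87d3e-...) took the name. The API server refuses, and nestedpendingoperations retries with a doubling delay (500ms above, 1s below) until the kubelet converges on the new UID. A minimal sketch of the same bound TokenRequest via the Python kubernetes client, which would fail the same way for a stale UID (assumes RBAC to create service-account tokens; illustrative only, not part of the captured log):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    req = client.AuthenticationV1TokenRequest(
        spec=client.V1TokenRequestSpec(
            audiences=["https://kubernetes.default.svc"],
            expiration_seconds=3600,
            bound_object_ref=client.V1BoundObjectReference(
                kind="Pod",
                name="openstackclient",
                uid="2c0ae23e-7623-4381-8820-05c471ae126d",  # stale UID from the log
            ),
        )
    )
    # Rejected (403 Forbidden) when the UID no longer matches the live pod
    v1.create_namespaced_service_account_token(
        "openstackclient-openstackclient", "openstack", req)
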
source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.224337 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.230140 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ld2hv" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.322569 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.351102 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.369254 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcnn\" (UniqueName: \"kubernetes.io/projected/5cc6f5bd-2c4f-418b-bf74-97db3a201df6-kube-api-access-pxcnn\") pod \"kube-state-metrics-0\" (UID: \"5cc6f5bd-2c4f-418b-bf74-97db3a201df6\") " pod="openstack/kube-state-metrics-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.474015 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5\") pod \"openstackclient\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.474066 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxcnn\" (UniqueName: \"kubernetes.io/projected/5cc6f5bd-2c4f-418b-bf74-97db3a201df6-kube-api-access-pxcnn\") pod \"kube-state-metrics-0\" (UID: \"5cc6f5bd-2c4f-418b-bf74-97db3a201df6\") " pod="openstack/kube-state-metrics-0" Oct 14 09:10:57 crc kubenswrapper[5058]: E1014 09:10:57.495063 5058 projected.go:194] Error preparing data for projected volume kube-api-access-lrfp5 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (2c0ae23e-7623-4381-8820-05c471ae126d) does not match the UID in record. The object might have been deleted and then recreated Oct 14 09:10:57 crc kubenswrapper[5058]: E1014 09:10:57.495129 5058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5 podName:2c0ae23e-7623-4381-8820-05c471ae126d nodeName:}" failed. No retries permitted until 2025-10-14 09:10:58.495111737 +0000 UTC m=+8606.406195543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lrfp5" (UniqueName: "kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5") pod "openstackclient" (UID: "2c0ae23e-7623-4381-8820-05c471ae126d") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (2c0ae23e-7623-4381-8820-05c471ae126d) does not match the UID in record. 
The object might have been deleted and then recreated Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.545530 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxcnn\" (UniqueName: \"kubernetes.io/projected/5cc6f5bd-2c4f-418b-bf74-97db3a201df6-kube-api-access-pxcnn\") pod \"kube-state-metrics-0\" (UID: \"5cc6f5bd-2c4f-418b-bf74-97db3a201df6\") " pod="openstack/kube-state-metrics-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.608825 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.627834 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.638229 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2c0ae23e-7623-4381-8820-05c471ae126d" podUID="11b87d3e-44ac-4c85-a244-699d47ba80fa" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.647341 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.791082 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret\") pod \"2c0ae23e-7623-4381-8820-05c471ae126d\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.791843 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config\") pod \"2c0ae23e-7623-4381-8820-05c471ae126d\" (UID: \"2c0ae23e-7623-4381-8820-05c471ae126d\") " Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.792298 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2c0ae23e-7623-4381-8820-05c471ae126d" (UID: "2c0ae23e-7623-4381-8820-05c471ae126d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.793214 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.793244 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/2c0ae23e-7623-4381-8820-05c471ae126d-kube-api-access-lrfp5\") on node \"crc\" DevicePath \"\"" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.795194 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2c0ae23e-7623-4381-8820-05c471ae126d" (UID: "2c0ae23e-7623-4381-8820-05c471ae126d"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.884001 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.892358 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.894543 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2c0ae23e-7623-4381-8820-05c471ae126d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.897241 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.897394 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.897443 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-lwbnr" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.897538 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.935975 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.998890 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.998945 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.999008 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.999118 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.999155 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: 
\"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:57 crc kubenswrapper[5058]: I1014 09:10:57.999182 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7js6m\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-kube-api-access-7js6m\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101540 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101860 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101894 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7js6m\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-kube-api-access-7js6m\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101928 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101948 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.101990 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.102136 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.108764 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-out\") pod 
\"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.109365 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.111285 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.119126 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.140774 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7js6m\" (UniqueName: \"kubernetes.io/projected/b3f02b3a-74f4-4e01-ba89-48f0d8c48dac-kube-api-access-7js6m\") pod \"alertmanager-metric-storage-0\" (UID: \"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac\") " pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.222929 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.492033 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.502702 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.507295 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.507573 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-mntxq" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.507791 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.507936 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.508063 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.508187 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.519998 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613707 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613780 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzcm\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-kube-api-access-xmzcm\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613825 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613884 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613966 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d38fa47-7bea-4cd8-be24-67ed0f250750-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.613981 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d38fa47-7bea-4cd8-be24-67ed0f250750-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.614010 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.614119 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.637420 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.645378 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2c0ae23e-7623-4381-8820-05c471ae126d" podUID="11b87d3e-44ac-4c85-a244-699d47ba80fa" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.720915 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721025 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d38fa47-7bea-4cd8-be24-67ed0f250750-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721047 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d38fa47-7bea-4cd8-be24-67ed0f250750-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721077 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721119 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " 
pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721149 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721186 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzcm\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-kube-api-access-xmzcm\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.721210 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.728267 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.728885 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d38fa47-7bea-4cd8-be24-67ed0f250750-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.729148 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d38fa47-7bea-4cd8-be24-67ed0f250750-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0" Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.732964 5058 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.733021 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/39f22c31828f247bafb2b3926a51b0a5bc106a1815163a3754e0ab8d8cef07c0/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.733412 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.733445 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d38fa47-7bea-4cd8-be24-67ed0f250750-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.737254 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.753789 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzcm\" (UniqueName: \"kubernetes.io/projected/4d38fa47-7bea-4cd8-be24-67ed0f250750-kube-api-access-xmzcm\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.791140 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b5cd705-197d-4565-b4d8-77cf76abc4c5\") pod \"prometheus-metric-storage-0\" (UID: \"4d38fa47-7bea-4cd8-be24-67ed0f250750\") " pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.805702 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:10:58 crc kubenswrapper[5058]: E1014 09:10:58.806080 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.837536 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.861587 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0ae23e-7623-4381-8820-05c471ae126d" path="/var/lib/kubelet/pods/2c0ae23e-7623-4381-8820-05c471ae126d/volumes"
Oct 14 09:10:58 crc kubenswrapper[5058]: I1014 09:10:58.873301 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.020496 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 09:10:59 crc kubenswrapper[5058]: W1014 09:10:59.076990 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11b87d3e_44ac_4c85_a244_699d47ba80fa.slice/crio-e30947cdbe543bc4f2ed8c6e550c94dd708ceaecacfd3d8606c0c70b79293814 WatchSource:0}: Error finding container e30947cdbe543bc4f2ed8c6e550c94dd708ceaecacfd3d8606c0c70b79293814: Status 404 returned error can't find the container with id e30947cdbe543bc4f2ed8c6e550c94dd708ceaecacfd3d8606c0c70b79293814
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.148742 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.469007 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.552028 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.663734 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"11b87d3e-44ac-4c85-a244-699d47ba80fa","Type":"ContainerStarted","Data":"e30947cdbe543bc4f2ed8c6e550c94dd708ceaecacfd3d8606c0c70b79293814"}
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.664601 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac","Type":"ContainerStarted","Data":"e148cefc52f45d1b8cac7b89cc0f924a92bcbfba06b3560b63b45271fd49d11b"}
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.666053 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5cc6f5bd-2c4f-418b-bf74-97db3a201df6","Type":"ContainerStarted","Data":"32fe7781a3a24f51045a8a977356cb3d73fd853bd5015551d7942c9a07aa2d36"}
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.673149 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerStarted","Data":"ca724a919beec5710044f213884646a361c5501f368e5e01d537767aac5b4837"}
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.690193 5058 generic.go:334] "Generic (PLEG): container finished" podID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" containerID="a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75" exitCode=137
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.690245 5058 scope.go:117] "RemoveContainer" containerID="a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75"
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.690361 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.697473 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config\") pod \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") "
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.697653 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwjjm\" (UniqueName: \"kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm\") pod \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") "
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.697847 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret\") pod \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\" (UID: \"49766830-cc2e-4d82-9a6d-a30bc53fa60b\") "
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.706440 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm" (OuterVolumeSpecName: "kube-api-access-jwjjm") pod "49766830-cc2e-4d82-9a6d-a30bc53fa60b" (UID: "49766830-cc2e-4d82-9a6d-a30bc53fa60b"). InnerVolumeSpecName "kube-api-access-jwjjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.724349 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "49766830-cc2e-4d82-9a6d-a30bc53fa60b" (UID: "49766830-cc2e-4d82-9a6d-a30bc53fa60b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.743617 5058 scope.go:117] "RemoveContainer" containerID="a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75"
Oct 14 09:10:59 crc kubenswrapper[5058]: E1014 09:10:59.752009 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75\": container with ID starting with a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75 not found: ID does not exist" containerID="a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75"
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.752277 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75"} err="failed to get container status \"a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75\": rpc error: code = NotFound desc = could not find container \"a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75\": container with ID starting with a75740e54617140957ad1c648989f119057b5550e80f07d4c147a15bfa9fdf75 not found: ID does not exist"
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.759398 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "49766830-cc2e-4d82-9a6d-a30bc53fa60b" (UID: "49766830-cc2e-4d82-9a6d-a30bc53fa60b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.802829 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.802896 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/49766830-cc2e-4d82-9a6d-a30bc53fa60b-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 14 09:10:59 crc kubenswrapper[5058]: I1014 09:10:59.802909 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwjjm\" (UniqueName: \"kubernetes.io/projected/49766830-cc2e-4d82-9a6d-a30bc53fa60b-kube-api-access-jwjjm\") on node \"crc\" DevicePath \"\""
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.013165 5058 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" podUID="11b87d3e-44ac-4c85-a244-699d47ba80fa"
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.702346 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"11b87d3e-44ac-4c85-a244-699d47ba80fa","Type":"ContainerStarted","Data":"159335fcfb04760e577889158e998853934959c78926be06e9f52eecd851b613"}
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.704461 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5cc6f5bd-2c4f-418b-bf74-97db3a201df6","Type":"ContainerStarted","Data":"bc7bc243d2a0543eb4c3f4ee01483517e109489e242e5e0c75d3b56b770c8969"}
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.704642 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.720688 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.7206723109999995 podStartE2EDuration="4.720672311s" podCreationTimestamp="2025-10-14 09:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:11:00.715495763 +0000 UTC m=+8608.626579569" watchObservedRunningTime="2025-10-14 09:11:00.720672311 +0000 UTC m=+8608.631756117"
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.736947 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.186604591 podStartE2EDuration="3.736930726s" podCreationTimestamp="2025-10-14 09:10:57 +0000 UTC" firstStartedPulling="2025-10-14 09:10:58.882254078 +0000 UTC m=+8606.793337874" lastFinishedPulling="2025-10-14 09:10:59.432580203 +0000 UTC m=+8607.343664009" observedRunningTime="2025-10-14 09:11:00.730720768 +0000 UTC m=+8608.641804574" watchObservedRunningTime="2025-10-14 09:11:00.736930726 +0000 UTC m=+8608.648014532"
Oct 14 09:11:00 crc kubenswrapper[5058]: I1014 09:11:00.800972 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49766830-cc2e-4d82-9a6d-a30bc53fa60b" path="/var/lib/kubelet/pods/49766830-cc2e-4d82-9a6d-a30bc53fa60b/volumes"
Oct 14 09:11:03 crc kubenswrapper[5058]: I1014 09:11:03.048680 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4v8rq"]
Oct 14 09:11:03 crc kubenswrapper[5058]: I1014 09:11:03.064072 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4v8rq"]
Oct 14 09:11:04 crc kubenswrapper[5058]: I1014 09:11:04.801244 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400309c2-becd-4fa9-ab65-fe1a18c3011e" path="/var/lib/kubelet/pods/400309c2-becd-4fa9-ab65-fe1a18c3011e/volumes"
Oct 14 09:11:05 crc kubenswrapper[5058]: I1014 09:11:05.766033 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac","Type":"ContainerStarted","Data":"3f61979fde7d5d36737b052992b00c041238189c6e843b9fc8042d6b5047f566"}
Oct 14 09:11:06 crc kubenswrapper[5058]: I1014 09:11:06.781892 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerStarted","Data":"4a3517b35a56b293d09027223dfe3307e9ee320fe966a0fe31570b15297efe4c"}
Oct 14 09:11:07 crc kubenswrapper[5058]: I1014 09:11:07.615348 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 14 09:11:10 crc kubenswrapper[5058]: I1014 09:11:10.791194 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:11:10 crc kubenswrapper[5058]: E1014 09:11:10.791975 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:11:13 crc kubenswrapper[5058]: I1014 09:11:13.866253 5058 generic.go:334] "Generic (PLEG): container finished" podID="b3f02b3a-74f4-4e01-ba89-48f0d8c48dac" containerID="3f61979fde7d5d36737b052992b00c041238189c6e843b9fc8042d6b5047f566" exitCode=0
Oct 14 09:11:13 crc kubenswrapper[5058]: I1014 09:11:13.866944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac","Type":"ContainerDied","Data":"3f61979fde7d5d36737b052992b00c041238189c6e843b9fc8042d6b5047f566"}
Oct 14 09:11:14 crc kubenswrapper[5058]: I1014 09:11:14.889582 5058 generic.go:334] "Generic (PLEG): container finished" podID="4d38fa47-7bea-4cd8-be24-67ed0f250750" containerID="4a3517b35a56b293d09027223dfe3307e9ee320fe966a0fe31570b15297efe4c" exitCode=0
Oct 14 09:11:14 crc kubenswrapper[5058]: I1014 09:11:14.889628 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerDied","Data":"4a3517b35a56b293d09027223dfe3307e9ee320fe966a0fe31570b15297efe4c"}
Oct 14 09:11:16 crc kubenswrapper[5058]: I1014 09:11:16.914525 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac","Type":"ContainerStarted","Data":"c0433ab72874f54b6b32f86bd1689400ae8af1bd3ece871e110c243324fcb7ba"}
Oct 14 09:11:21 crc kubenswrapper[5058]: I1014 09:11:21.790273 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:11:21 crc kubenswrapper[5058]: E1014 09:11:21.791087 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:11:21 crc kubenswrapper[5058]: I1014 09:11:21.977632 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"b3f02b3a-74f4-4e01-ba89-48f0d8c48dac","Type":"ContainerStarted","Data":"39bf64728fbb0304638d3ed0fb192eeece660be4e5c7ad1acaa49aee58230119"}
Oct 14 09:11:21 crc kubenswrapper[5058]: I1014 09:11:21.978138 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Oct 14 09:11:21 crc kubenswrapper[5058]: I1014 09:11:21.983916 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Oct 14 09:11:22 crc kubenswrapper[5058]: I1014 09:11:22.002374 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.150907938 podStartE2EDuration="25.002357865s" podCreationTimestamp="2025-10-14 09:10:57 +0000 UTC" firstStartedPulling="2025-10-14 09:10:59.365068443 +0000 UTC m=+8607.276152259" lastFinishedPulling="2025-10-14 09:11:16.21651834 +0000 UTC m=+8624.127602186" observedRunningTime="2025-10-14 09:11:22.000186493 +0000 UTC m=+8629.911270299" watchObservedRunningTime="2025-10-14 09:11:22.002357865 +0000 UTC m=+8629.913441671"
Oct 14 09:11:26 crc kubenswrapper[5058]: I1014 09:11:26.025831 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerStarted","Data":"746d14fa6af6b1e0b01373781c5e974dea4160a296ff0b8d056a84579e34602c"}
Oct 14 09:11:30 crc kubenswrapper[5058]: I1014 09:11:30.075163 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerStarted","Data":"7120dc009295d6595826113c55d8af7a0885f2dfc9c4e6de47d4eb4873a84354"}
Oct 14 09:11:33 crc kubenswrapper[5058]: I1014 09:11:33.790860 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:11:33 crc kubenswrapper[5058]: E1014 09:11:33.792008 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:11:34 crc kubenswrapper[5058]: I1014 09:11:34.127279 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4d38fa47-7bea-4cd8-be24-67ed0f250750","Type":"ContainerStarted","Data":"fdfac96ee1679e8acd03a8efdfda134109a052e2f83a259a5deb2e720d20520b"}
Oct 14 09:11:34 crc kubenswrapper[5058]: I1014 09:11:34.156570 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.64822808 podStartE2EDuration="37.156551832s" podCreationTimestamp="2025-10-14 09:10:57 +0000 UTC" firstStartedPulling="2025-10-14 09:10:59.514882636 +0000 UTC m=+8607.425966442" lastFinishedPulling="2025-10-14 09:11:33.023206348 +0000 UTC m=+8640.934290194" observedRunningTime="2025-10-14 09:11:34.147503263 +0000 UTC m=+8642.058587119" watchObservedRunningTime="2025-10-14 09:11:34.156551832 +0000 UTC m=+8642.067635638"
Oct 14 09:11:38 crc kubenswrapper[5058]: I1014 09:11:38.838304 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.364319 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.368083 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.370872 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.371002 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.383540 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483174 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483268 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483390 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483487 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483586 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkpv\" (UniqueName: \"kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.483715 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.587878 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.587951 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.587996 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkpv\" (UniqueName: \"kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.588025 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.588058 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.588097 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.588122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.593927 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.594149 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.613350 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.616353 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.630458 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.631046 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.650821 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkpv\" (UniqueName: \"kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv\") pod \"ceilometer-0\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") " pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.779310 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.838651 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:11:43 crc kubenswrapper[5058]: I1014 09:11:43.840740 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:11:44 crc kubenswrapper[5058]: I1014 09:11:44.073670 5058 scope.go:117] "RemoveContainer" containerID="1289ff511a077089dde8e7317484106ffd4bf2a18974fae848317a11c511d497"
Oct 14 09:11:44 crc kubenswrapper[5058]: I1014 09:11:44.111724 5058 scope.go:117] "RemoveContainer" containerID="92b6a5f7af34cb7d00fc011bd64e68044c377b710cdb4989041059e9eb2ae08e"
Oct 14 09:11:44 crc kubenswrapper[5058]: I1014 09:11:44.181038 5058 scope.go:117] "RemoveContainer" containerID="349b5abb6bd73993447989e751814842dad8345ff132bc3a44ca231f779cca36"
Oct 14 09:11:44 crc kubenswrapper[5058]: I1014 09:11:44.245163 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Oct 14 09:11:44 crc kubenswrapper[5058]: I1014 09:11:44.316612 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:11:45 crc kubenswrapper[5058]: I1014 09:11:45.256606 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerStarted","Data":"16d5d2abb14b79bc346e5a70da7fbed721525309b611d0c736286267f2ff2234"}
Oct 14 09:11:45 crc kubenswrapper[5058]: I1014 09:11:45.789616 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:11:45 crc kubenswrapper[5058]: E1014 09:11:45.789898 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:11:50 crc kubenswrapper[5058]: I1014 09:11:50.344229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerStarted","Data":"9cbfcd776706161640a2dcd03df51fdfcd237916e14718bb545fbb87a06b8802"}
Oct 14 09:11:51 crc kubenswrapper[5058]: I1014 09:11:51.362272 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerStarted","Data":"a7262ca38a9fe7389c80e0018af6258badf2e4ec991f1403eb76339013605542"}
Oct 14 09:11:52 crc kubenswrapper[5058]: I1014 09:11:52.388418 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerStarted","Data":"85041c507981afb4e69282efa5767a2281b796cf5d0abfa41e9fbc35c9360a3d"}
Oct 14 09:11:59 crc kubenswrapper[5058]: I1014 09:11:59.791231 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:11:59 crc kubenswrapper[5058]: E1014 09:11:59.792157 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:12:05 crc kubenswrapper[5058]: I1014 09:12:05.571535 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerStarted","Data":"11bfd8b598a6853332e6f2af1ebc9fa8134147ea4566cec50db4f9ca29b6a8e2"}
Oct 14 09:12:05 crc kubenswrapper[5058]: I1014 09:12:05.576311 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 14 09:12:05 crc kubenswrapper[5058]: I1014 09:12:05.604262 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.768139734 podStartE2EDuration="22.604240329s" podCreationTimestamp="2025-10-14 09:11:43 +0000 UTC" firstStartedPulling="2025-10-14 09:11:44.354085395 +0000 UTC m=+8652.265169201" lastFinishedPulling="2025-10-14 09:12:05.19018599 +0000 UTC m=+8673.101269796" observedRunningTime="2025-10-14 09:12:05.596393984 +0000 UTC m=+8673.507477830" watchObservedRunningTime="2025-10-14 09:12:05.604240329 +0000 UTC m=+8673.515324145"
Oct 14 09:12:09 crc kubenswrapper[5058]: I1014 09:12:09.039256 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-54zzh"]
Oct 14 09:12:09 crc kubenswrapper[5058]: I1014 09:12:09.050674 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-54zzh"]
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.557699 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-fbdrz"]
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.560694 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fbdrz"
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.576720 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fbdrz"]
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.727138 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn24j\" (UniqueName: \"kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j\") pod \"aodh-db-create-fbdrz\" (UID: \"14878d9d-4680-4800-9951-2af5c7617134\") " pod="openstack/aodh-db-create-fbdrz"
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.802545 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654849e1-fb87-435b-9e65-c6f93143a639" path="/var/lib/kubelet/pods/654849e1-fb87-435b-9e65-c6f93143a639/volumes"
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.829058 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn24j\" (UniqueName: \"kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j\") pod \"aodh-db-create-fbdrz\" (UID: \"14878d9d-4680-4800-9951-2af5c7617134\") " pod="openstack/aodh-db-create-fbdrz"
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.854844 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn24j\" (UniqueName: \"kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j\") pod \"aodh-db-create-fbdrz\" (UID: \"14878d9d-4680-4800-9951-2af5c7617134\") " pod="openstack/aodh-db-create-fbdrz"
Oct 14 09:12:10 crc kubenswrapper[5058]: I1014 09:12:10.930312 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fbdrz"
Oct 14 09:12:11 crc kubenswrapper[5058]: I1014 09:12:11.420318 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fbdrz"]
Oct 14 09:12:11 crc kubenswrapper[5058]: I1014 09:12:11.657885 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fbdrz" event={"ID":"14878d9d-4680-4800-9951-2af5c7617134","Type":"ContainerStarted","Data":"0b96349128eb9812630db74092622915e513a6ed27f81f17fcce0e583e3ee205"}
Oct 14 09:12:11 crc kubenswrapper[5058]: I1014 09:12:11.657928 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fbdrz" event={"ID":"14878d9d-4680-4800-9951-2af5c7617134","Type":"ContainerStarted","Data":"08c2b02d79df7cefff5c5c170b994b9634c3f35e8013f2cac74896bc9cf2b24e"}
Oct 14 09:12:11 crc kubenswrapper[5058]: I1014 09:12:11.678695 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-fbdrz" podStartSLOduration=1.678678356 podStartE2EDuration="1.678678356s" podCreationTimestamp="2025-10-14 09:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:12:11.672097047 +0000 UTC m=+8679.583180883" watchObservedRunningTime="2025-10-14 09:12:11.678678356 +0000 UTC m=+8679.589762152"
Oct 14 09:12:11 crc kubenswrapper[5058]: I1014 09:12:11.789816 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:12:11 crc kubenswrapper[5058]: E1014 09:12:11.790084 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:12:12 crc kubenswrapper[5058]: I1014 09:12:12.671771 5058 generic.go:334] "Generic (PLEG): container finished" podID="14878d9d-4680-4800-9951-2af5c7617134" containerID="0b96349128eb9812630db74092622915e513a6ed27f81f17fcce0e583e3ee205" exitCode=0 Oct 14 09:12:12 crc kubenswrapper[5058]: I1014 09:12:12.671882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fbdrz" event={"ID":"14878d9d-4680-4800-9951-2af5c7617134","Type":"ContainerDied","Data":"0b96349128eb9812630db74092622915e513a6ed27f81f17fcce0e583e3ee205"} Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.036246 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fbdrz" Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.198174 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn24j\" (UniqueName: \"kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j\") pod \"14878d9d-4680-4800-9951-2af5c7617134\" (UID: \"14878d9d-4680-4800-9951-2af5c7617134\") " Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.203874 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j" (OuterVolumeSpecName: "kube-api-access-kn24j") pod "14878d9d-4680-4800-9951-2af5c7617134" (UID: "14878d9d-4680-4800-9951-2af5c7617134"). InnerVolumeSpecName "kube-api-access-kn24j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.300516 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn24j\" (UniqueName: \"kubernetes.io/projected/14878d9d-4680-4800-9951-2af5c7617134-kube-api-access-kn24j\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.696395 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fbdrz" event={"ID":"14878d9d-4680-4800-9951-2af5c7617134","Type":"ContainerDied","Data":"08c2b02d79df7cefff5c5c170b994b9634c3f35e8013f2cac74896bc9cf2b24e"} Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.696698 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c2b02d79df7cefff5c5c170b994b9634c3f35e8013f2cac74896bc9cf2b24e" Oct 14 09:12:14 crc kubenswrapper[5058]: I1014 09:12:14.696464 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-fbdrz" Oct 14 09:12:19 crc kubenswrapper[5058]: I1014 09:12:19.039878 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fcbb-account-create-pt9ld"] Oct 14 09:12:19 crc kubenswrapper[5058]: I1014 09:12:19.057297 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fcbb-account-create-pt9ld"] Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.836479 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936ed4d4-9a04-4d09-b93d-e37419454c38" path="/var/lib/kubelet/pods/936ed4d4-9a04-4d09-b93d-e37419454c38/volumes" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.859595 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-c776-account-create-4s52d"] Oct 14 09:12:20 crc kubenswrapper[5058]: E1014 09:12:20.860959 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14878d9d-4680-4800-9951-2af5c7617134" containerName="mariadb-database-create" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.860993 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="14878d9d-4680-4800-9951-2af5c7617134" containerName="mariadb-database-create" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.862041 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="14878d9d-4680-4800-9951-2af5c7617134" containerName="mariadb-database-create" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.865269 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.868986 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.882904 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-c776-account-create-4s52d"] Oct 14 09:12:20 crc kubenswrapper[5058]: I1014 09:12:20.958151 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b82vs\" (UniqueName: \"kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs\") pod \"aodh-c776-account-create-4s52d\" (UID: \"231a4daf-37c1-4500-83f1-708c2ac4c2c6\") " pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:21 crc kubenswrapper[5058]: I1014 09:12:21.060542 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b82vs\" (UniqueName: \"kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs\") pod \"aodh-c776-account-create-4s52d\" (UID: \"231a4daf-37c1-4500-83f1-708c2ac4c2c6\") " pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:21 crc kubenswrapper[5058]: I1014 09:12:21.083177 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b82vs\" (UniqueName: \"kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs\") pod \"aodh-c776-account-create-4s52d\" (UID: \"231a4daf-37c1-4500-83f1-708c2ac4c2c6\") " pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:21 crc kubenswrapper[5058]: I1014 09:12:21.194532 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:21 crc kubenswrapper[5058]: I1014 09:12:21.677565 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-c776-account-create-4s52d"] Oct 14 09:12:21 crc kubenswrapper[5058]: W1014 09:12:21.681312 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231a4daf_37c1_4500_83f1_708c2ac4c2c6.slice/crio-5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc WatchSource:0}: Error finding container 5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc: Status 404 returned error can't find the container with id 5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc Oct 14 09:12:21 crc kubenswrapper[5058]: I1014 09:12:21.780624 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c776-account-create-4s52d" event={"ID":"231a4daf-37c1-4500-83f1-708c2ac4c2c6","Type":"ContainerStarted","Data":"5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc"} Oct 14 09:12:22 crc kubenswrapper[5058]: I1014 09:12:22.803917 5058 generic.go:334] "Generic (PLEG): container finished" podID="231a4daf-37c1-4500-83f1-708c2ac4c2c6" containerID="187773785e0d4b5865d98996893c53fec9176786e4460ca9e05272deda1c59ba" exitCode=0 Oct 14 09:12:22 crc kubenswrapper[5058]: I1014 09:12:22.828729 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c776-account-create-4s52d" event={"ID":"231a4daf-37c1-4500-83f1-708c2ac4c2c6","Type":"ContainerDied","Data":"187773785e0d4b5865d98996893c53fec9176786e4460ca9e05272deda1c59ba"} Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.273091 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.430407 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b82vs\" (UniqueName: \"kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs\") pod \"231a4daf-37c1-4500-83f1-708c2ac4c2c6\" (UID: \"231a4daf-37c1-4500-83f1-708c2ac4c2c6\") " Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.438861 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs" (OuterVolumeSpecName: "kube-api-access-b82vs") pod "231a4daf-37c1-4500-83f1-708c2ac4c2c6" (UID: "231a4daf-37c1-4500-83f1-708c2ac4c2c6"). InnerVolumeSpecName "kube-api-access-b82vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.533266 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b82vs\" (UniqueName: \"kubernetes.io/projected/231a4daf-37c1-4500-83f1-708c2ac4c2c6-kube-api-access-b82vs\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.790317 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:12:24 crc kubenswrapper[5058]: E1014 09:12:24.790927 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.834944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c776-account-create-4s52d" event={"ID":"231a4daf-37c1-4500-83f1-708c2ac4c2c6","Type":"ContainerDied","Data":"5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc"} Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.835106 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9b1718ca33fca9ab686f4f3dea54c0d4eada3ce4621db2347a38fe25055ecc" Oct 14 09:12:24 crc kubenswrapper[5058]: I1014 09:12:24.835302 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c776-account-create-4s52d" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.241711 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-nmf2l"] Oct 14 09:12:26 crc kubenswrapper[5058]: E1014 09:12:26.242521 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231a4daf-37c1-4500-83f1-708c2ac4c2c6" containerName="mariadb-account-create" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.242539 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="231a4daf-37c1-4500-83f1-708c2ac4c2c6" containerName="mariadb-account-create" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.242776 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="231a4daf-37c1-4500-83f1-708c2ac4c2c6" containerName="mariadb-account-create" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.243723 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.246992 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.247338 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.248121 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kc5mr" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.263937 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nmf2l"] Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.379903 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.379969 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhwt\" (UniqueName: \"kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.380145 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.380188 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.481862 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.481909 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.481996 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.482033 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhwt\" (UniqueName: 
\"kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.488869 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.491957 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.500316 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.508302 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhwt\" (UniqueName: \"kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt\") pod \"aodh-db-sync-nmf2l\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:26 crc kubenswrapper[5058]: I1014 09:12:26.589272 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:27 crc kubenswrapper[5058]: I1014 09:12:27.381788 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-nmf2l"] Oct 14 09:12:27 crc kubenswrapper[5058]: W1014 09:12:27.385974 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9a3643_6329_4117_b099_e8b649dc7906.slice/crio-e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1 WatchSource:0}: Error finding container e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1: Status 404 returned error can't find the container with id e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1 Oct 14 09:12:27 crc kubenswrapper[5058]: I1014 09:12:27.907662 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nmf2l" event={"ID":"8f9a3643-6329-4117-b099-e8b649dc7906","Type":"ContainerStarted","Data":"e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1"} Oct 14 09:12:32 crc kubenswrapper[5058]: I1014 09:12:32.968160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nmf2l" event={"ID":"8f9a3643-6329-4117-b099-e8b649dc7906","Type":"ContainerStarted","Data":"7a9cd5b982c8dc6e3198b3f7084f5cea40228d44d0bc69cbf6f8ec4c502a32ea"} Oct 14 09:12:32 crc kubenswrapper[5058]: I1014 09:12:32.992119 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-nmf2l" podStartSLOduration=2.38103688 podStartE2EDuration="6.992104717s" podCreationTimestamp="2025-10-14 09:12:26 +0000 UTC" firstStartedPulling="2025-10-14 09:12:27.389111 +0000 UTC m=+8695.300194806" lastFinishedPulling="2025-10-14 09:12:32.000178827 +0000 UTC 
m=+8699.911262643" observedRunningTime="2025-10-14 09:12:32.983953804 +0000 UTC m=+8700.895037600" watchObservedRunningTime="2025-10-14 09:12:32.992104717 +0000 UTC m=+8700.903188523" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.040229 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"] Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.045903 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.053635 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"] Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.184007 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.184139 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4q7r\" (UniqueName: \"kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.184539 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.287883 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.288228 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.288355 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4q7r\" (UniqueName: \"kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.288480 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 
09:12:34.289108 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.311697 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4q7r\" (UniqueName: \"kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r\") pod \"redhat-operators-ngqc4\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") " pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.374607 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:34 crc kubenswrapper[5058]: I1014 09:12:34.901546 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"] Oct 14 09:12:34 crc kubenswrapper[5058]: W1014 09:12:34.903196 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059d5692_f818_42c2_afac_4f0972a36ebf.slice/crio-5163d1be080a824dc053a5ef73ebe57ee2e357559dc6c021b3aab01fe5665c9d WatchSource:0}: Error finding container 5163d1be080a824dc053a5ef73ebe57ee2e357559dc6c021b3aab01fe5665c9d: Status 404 returned error can't find the container with id 5163d1be080a824dc053a5ef73ebe57ee2e357559dc6c021b3aab01fe5665c9d Oct 14 09:12:35 crc kubenswrapper[5058]: I1014 09:12:35.002598 5058 generic.go:334] "Generic (PLEG): container finished" podID="8f9a3643-6329-4117-b099-e8b649dc7906" containerID="7a9cd5b982c8dc6e3198b3f7084f5cea40228d44d0bc69cbf6f8ec4c502a32ea" exitCode=0 Oct 14 09:12:35 crc kubenswrapper[5058]: I1014 09:12:35.002684 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nmf2l" event={"ID":"8f9a3643-6329-4117-b099-e8b649dc7906","Type":"ContainerDied","Data":"7a9cd5b982c8dc6e3198b3f7084f5cea40228d44d0bc69cbf6f8ec4c502a32ea"} Oct 14 09:12:35 crc kubenswrapper[5058]: I1014 09:12:35.007276 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerStarted","Data":"5163d1be080a824dc053a5ef73ebe57ee2e357559dc6c021b3aab01fe5665c9d"} Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.035764 5058 generic.go:334] "Generic (PLEG): container finished" podID="059d5692-f818-42c2-afac-4f0972a36ebf" containerID="8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986" exitCode=0 Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.038060 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerDied","Data":"8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986"} Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.454748 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.536620 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts\") pod \"8f9a3643-6329-4117-b099-e8b649dc7906\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.536743 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle\") pod \"8f9a3643-6329-4117-b099-e8b649dc7906\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.537038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhhwt\" (UniqueName: \"kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt\") pod \"8f9a3643-6329-4117-b099-e8b649dc7906\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.537070 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data\") pod \"8f9a3643-6329-4117-b099-e8b649dc7906\" (UID: \"8f9a3643-6329-4117-b099-e8b649dc7906\") " Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.546128 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt" (OuterVolumeSpecName: "kube-api-access-mhhwt") pod "8f9a3643-6329-4117-b099-e8b649dc7906" (UID: "8f9a3643-6329-4117-b099-e8b649dc7906"). InnerVolumeSpecName "kube-api-access-mhhwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.546832 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts" (OuterVolumeSpecName: "scripts") pod "8f9a3643-6329-4117-b099-e8b649dc7906" (UID: "8f9a3643-6329-4117-b099-e8b649dc7906"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.568784 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f9a3643-6329-4117-b099-e8b649dc7906" (UID: "8f9a3643-6329-4117-b099-e8b649dc7906"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.600085 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data" (OuterVolumeSpecName: "config-data") pod "8f9a3643-6329-4117-b099-e8b649dc7906" (UID: "8f9a3643-6329-4117-b099-e8b649dc7906"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.639630 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhhwt\" (UniqueName: \"kubernetes.io/projected/8f9a3643-6329-4117-b099-e8b649dc7906-kube-api-access-mhhwt\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.639663 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.639673 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:36 crc kubenswrapper[5058]: I1014 09:12:36.639682 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9a3643-6329-4117-b099-e8b649dc7906-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:12:37 crc kubenswrapper[5058]: I1014 09:12:37.072955 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerStarted","Data":"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"} Oct 14 09:12:37 crc kubenswrapper[5058]: I1014 09:12:37.079511 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-nmf2l" event={"ID":"8f9a3643-6329-4117-b099-e8b649dc7906","Type":"ContainerDied","Data":"e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1"} Oct 14 09:12:37 crc kubenswrapper[5058]: I1014 09:12:37.079556 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-nmf2l" Oct 14 09:12:37 crc kubenswrapper[5058]: I1014 09:12:37.079666 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e876d04db51d0a2d07a56bd3ef716f3d44330a2d34b5b5d8b4dfc88fdd58f5d1" Oct 14 09:12:37 crc kubenswrapper[5058]: I1014 09:12:37.790046 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:12:37 crc kubenswrapper[5058]: E1014 09:12:37.790433 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.653512 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 14 09:12:40 crc kubenswrapper[5058]: E1014 09:12:40.654389 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9a3643-6329-4117-b099-e8b649dc7906" containerName="aodh-db-sync" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.654411 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9a3643-6329-4117-b099-e8b649dc7906" containerName="aodh-db-sync" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.654716 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9a3643-6329-4117-b099-e8b649dc7906" containerName="aodh-db-sync" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.657733 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.661406 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kc5mr" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.662276 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.662615 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.682141 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.725201 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpnp\" (UniqueName: \"kubernetes.io/projected/349423a8-3dec-46c0-b56a-ae58820cc454-kube-api-access-rjpnp\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.725256 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-combined-ca-bundle\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.725380 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-config-data\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.725467 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-scripts\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.827437 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-config-data\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.827526 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-scripts\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.827596 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpnp\" (UniqueName: \"kubernetes.io/projected/349423a8-3dec-46c0-b56a-ae58820cc454-kube-api-access-rjpnp\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.827616 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-combined-ca-bundle\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: 
I1014 09:12:40.832719 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-scripts\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.842978 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpnp\" (UniqueName: \"kubernetes.io/projected/349423a8-3dec-46c0-b56a-ae58820cc454-kube-api-access-rjpnp\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.843292 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-combined-ca-bundle\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.843553 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349423a8-3dec-46c0-b56a-ae58820cc454-config-data\") pod \"aodh-0\" (UID: \"349423a8-3dec-46c0-b56a-ae58820cc454\") " pod="openstack/aodh-0" Oct 14 09:12:40 crc kubenswrapper[5058]: I1014 09:12:40.999597 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 14 09:12:41 crc kubenswrapper[5058]: I1014 09:12:41.132242 5058 generic.go:334] "Generic (PLEG): container finished" podID="059d5692-f818-42c2-afac-4f0972a36ebf" containerID="494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880" exitCode=0 Oct 14 09:12:41 crc kubenswrapper[5058]: I1014 09:12:41.132274 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerDied","Data":"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"} Oct 14 09:12:41 crc kubenswrapper[5058]: I1014 09:12:41.657317 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 14 09:12:42 crc kubenswrapper[5058]: I1014 09:12:42.144035 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerStarted","Data":"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"} Oct 14 09:12:42 crc kubenswrapper[5058]: I1014 09:12:42.146179 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"349423a8-3dec-46c0-b56a-ae58820cc454","Type":"ContainerStarted","Data":"724d83f68a9b22a14074131b4ac56b6e6dd84ca1c7c0d81a2d866ffd55c06555"} Oct 14 09:12:42 crc kubenswrapper[5058]: I1014 09:12:42.146219 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"349423a8-3dec-46c0-b56a-ae58820cc454","Type":"ContainerStarted","Data":"85d0dab94f5780e8b65130e2af1b514551262dfff0d628bf0e7aadcd8caeb0db"} Oct 14 09:12:42 crc kubenswrapper[5058]: I1014 09:12:42.164545 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ngqc4" podStartSLOduration=3.551946939 podStartE2EDuration="9.164523821s" podCreationTimestamp="2025-10-14 09:12:33 +0000 UTC" firstStartedPulling="2025-10-14 09:12:36.044277924 +0000 UTC m=+8703.955361740" lastFinishedPulling="2025-10-14 09:12:41.656854816 +0000 UTC m=+8709.567938622" 
observedRunningTime="2025-10-14 09:12:42.160546777 +0000 UTC m=+8710.071630583" watchObservedRunningTime="2025-10-14 09:12:42.164523821 +0000 UTC m=+8710.075607627" Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.226136 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.229614 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-central-agent" containerID="cri-o://9cbfcd776706161640a2dcd03df51fdfcd237916e14718bb545fbb87a06b8802" gracePeriod=30 Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.230248 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd" containerID="cri-o://11bfd8b598a6853332e6f2af1ebc9fa8134147ea4566cec50db4f9ca29b6a8e2" gracePeriod=30 Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.230441 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-notification-agent" containerID="cri-o://a7262ca38a9fe7389c80e0018af6258badf2e4ec991f1403eb76339013605542" gracePeriod=30 Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.230491 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="sg-core" containerID="cri-o://85041c507981afb4e69282efa5767a2281b796cf5d0abfa41e9fbc35c9360a3d" gracePeriod=30 Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.240250 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.162:3000/\": EOF" Oct 14 09:12:43 crc kubenswrapper[5058]: I1014 09:12:43.781186 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.162:3000/\": dial tcp 10.217.1.162:3000: connect: connection refused" Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.197198 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"349423a8-3dec-46c0-b56a-ae58820cc454","Type":"ContainerStarted","Data":"4b39307ad16625b9c0d0c89e48039bdaf70b45f7d85fdee7ef9e9147209e595b"} Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205101 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerID="11bfd8b598a6853332e6f2af1ebc9fa8134147ea4566cec50db4f9ca29b6a8e2" exitCode=0 Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205133 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerID="85041c507981afb4e69282efa5767a2281b796cf5d0abfa41e9fbc35c9360a3d" exitCode=2 Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205140 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerID="9cbfcd776706161640a2dcd03df51fdfcd237916e14718bb545fbb87a06b8802" exitCode=0 Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerDied","Data":"11bfd8b598a6853332e6f2af1ebc9fa8134147ea4566cec50db4f9ca29b6a8e2"} Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205187 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerDied","Data":"85041c507981afb4e69282efa5767a2281b796cf5d0abfa41e9fbc35c9360a3d"} Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.205198 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerDied","Data":"9cbfcd776706161640a2dcd03df51fdfcd237916e14718bb545fbb87a06b8802"} Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.371162 5058 scope.go:117] "RemoveContainer" containerID="c058211268c94a15b18e8e233879faf2a1d91479fe0b1fbef79060470c35ab35" Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.375063 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.376195 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ngqc4" Oct 14 09:12:44 crc kubenswrapper[5058]: I1014 09:12:44.666257 5058 scope.go:117] "RemoveContainer" containerID="5d39a317798c6675328a3c7a8471f1c21cafaa388ffc98d402efe4a346c1e2b9" Oct 14 09:12:45 crc kubenswrapper[5058]: I1014 09:12:45.042400 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5h4hc"] Oct 14 09:12:45 crc kubenswrapper[5058]: I1014 09:12:45.051036 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5h4hc"] Oct 14 09:12:45 crc kubenswrapper[5058]: I1014 09:12:45.222324 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"349423a8-3dec-46c0-b56a-ae58820cc454","Type":"ContainerStarted","Data":"2ad4b254df8f21a108d1816ce30785f49080670decd7802abef7f5fe548f617d"} Oct 14 09:12:45 crc kubenswrapper[5058]: I1014 09:12:45.442564 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ngqc4" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="registry-server" probeResult="failure" output=< Oct 14 09:12:45 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:12:45 crc kubenswrapper[5058]: > Oct 14 09:12:46 crc kubenswrapper[5058]: I1014 09:12:46.808874 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84942974-990b-4917-805c-681c6c6038bf" path="/var/lib/kubelet/pods/84942974-990b-4917-805c-681c6c6038bf/volumes" Oct 14 09:12:47 crc kubenswrapper[5058]: I1014 09:12:47.244258 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"349423a8-3dec-46c0-b56a-ae58820cc454","Type":"ContainerStarted","Data":"40283dd4dc372b7743406daf2af9f20932464bf9a5a1cc46a59c32573053b41c"} Oct 14 09:12:47 crc kubenswrapper[5058]: I1014 09:12:47.283527 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.904546999 podStartE2EDuration="7.28350841s" podCreationTimestamp="2025-10-14 09:12:40 +0000 UTC" firstStartedPulling="2025-10-14 09:12:41.681904642 +0000 UTC m=+8709.592988448" lastFinishedPulling="2025-10-14 09:12:46.060866053 +0000 UTC m=+8713.971949859" observedRunningTime="2025-10-14 09:12:47.263014774 
+0000 UTC m=+8715.174098580" watchObservedRunningTime="2025-10-14 09:12:47.28350841 +0000 UTC m=+8715.194592216"
Oct 14 09:12:48 crc kubenswrapper[5058]: I1014 09:12:48.257728 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerID="a7262ca38a9fe7389c80e0018af6258badf2e4ec991f1403eb76339013605542" exitCode=0
Oct 14 09:12:48 crc kubenswrapper[5058]: I1014 09:12:48.257821 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerDied","Data":"a7262ca38a9fe7389c80e0018af6258badf2e4ec991f1403eb76339013605542"}
Oct 14 09:12:48 crc kubenswrapper[5058]: I1014 09:12:48.915288 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017372 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017442 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017540 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkpv\" (UniqueName: \"kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017557 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017611 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017649 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.017662 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd\") pod \"f4ddb4c0-63f9-477a-8377-21fb869500fb\" (UID: \"f4ddb4c0-63f9-477a-8377-21fb869500fb\") "
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.018461 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.018985 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.023071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts" (OuterVolumeSpecName: "scripts") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.029898 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv" (OuterVolumeSpecName: "kube-api-access-9hkpv") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "kube-api-access-9hkpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.046274 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.120258 5058 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.120287 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hkpv\" (UniqueName: \"kubernetes.io/projected/f4ddb4c0-63f9-477a-8377-21fb869500fb-kube-api-access-9hkpv\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.120298 5058 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.120309 5058 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.120318 5058 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4ddb4c0-63f9-477a-8377-21fb869500fb-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.121945 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.137011 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data" (OuterVolumeSpecName: "config-data") pod "f4ddb4c0-63f9-477a-8377-21fb869500fb" (UID: "f4ddb4c0-63f9-477a-8377-21fb869500fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.222040 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.222070 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ddb4c0-63f9-477a-8377-21fb869500fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.277303 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4ddb4c0-63f9-477a-8377-21fb869500fb","Type":"ContainerDied","Data":"16d5d2abb14b79bc346e5a70da7fbed721525309b611d0c736286267f2ff2234"}
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.277369 5058 scope.go:117] "RemoveContainer" containerID="11bfd8b598a6853332e6f2af1ebc9fa8134147ea4566cec50db4f9ca29b6a8e2"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.277373 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.303339 5058 scope.go:117] "RemoveContainer" containerID="85041c507981afb4e69282efa5767a2281b796cf5d0abfa41e9fbc35c9360a3d"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.328598 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.336922 5058 scope.go:117] "RemoveContainer" containerID="a7262ca38a9fe7389c80e0018af6258badf2e4ec991f1403eb76339013605542"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.354471 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.364712 5058 scope.go:117] "RemoveContainer" containerID="9cbfcd776706161640a2dcd03df51fdfcd237916e14718bb545fbb87a06b8802"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.369468 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:12:49 crc kubenswrapper[5058]: E1014 09:12:49.369969 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.369994 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd"
Oct 14 09:12:49 crc kubenswrapper[5058]: E1014 09:12:49.370047 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-central-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370057 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-central-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: E1014 09:12:49.370066 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-notification-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370074 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-notification-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: E1014 09:12:49.370116 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="sg-core"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370126 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="sg-core"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370424 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-notification-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370459 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="sg-core"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370482 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="ceilometer-central-agent"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.370498 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" containerName="proxy-httpd"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.372967 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.376985 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.377261 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.380293 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.528701 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.528783 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-scripts\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.528875 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.528908 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.528949 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-config-data\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.529060 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhgf\" (UniqueName: \"kubernetes.io/projected/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-kube-api-access-8rhgf\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.529173 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641220 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhgf\" (UniqueName: \"kubernetes.io/projected/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-kube-api-access-8rhgf\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641614 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641684 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641725 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-scripts\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641748 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641760 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.641783 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-config-data\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.642990 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.643108 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.646091 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-scripts\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.646575 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.647243 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-config-data\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.647620 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.658609 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhgf\" (UniqueName: \"kubernetes.io/projected/c1a6c29d-5dfc-4217-86d4-e4ea9618176b-kube-api-access-8rhgf\") pod \"ceilometer-0\" (UID: \"c1a6c29d-5dfc-4217-86d4-e4ea9618176b\") " pod="openstack/ceilometer-0"
Oct 14 09:12:49 crc kubenswrapper[5058]: I1014 09:12:49.692709 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 09:12:50 crc kubenswrapper[5058]: I1014 09:12:50.155715 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 14 09:12:50 crc kubenswrapper[5058]: W1014 09:12:50.156488 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a6c29d_5dfc_4217_86d4_e4ea9618176b.slice/crio-ceeb22d14ebae8d721ca0ef9a0428f24c95ca50dd8cd291e09058dff8bf5ee3e WatchSource:0}: Error finding container ceeb22d14ebae8d721ca0ef9a0428f24c95ca50dd8cd291e09058dff8bf5ee3e: Status 404 returned error can't find the container with id ceeb22d14ebae8d721ca0ef9a0428f24c95ca50dd8cd291e09058dff8bf5ee3e
Oct 14 09:12:50 crc kubenswrapper[5058]: I1014 09:12:50.327053 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a6c29d-5dfc-4217-86d4-e4ea9618176b","Type":"ContainerStarted","Data":"ceeb22d14ebae8d721ca0ef9a0428f24c95ca50dd8cd291e09058dff8bf5ee3e"}
Oct 14 09:12:50 crc kubenswrapper[5058]: I1014 09:12:50.807491 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ddb4c0-63f9-477a-8377-21fb869500fb" path="/var/lib/kubelet/pods/f4ddb4c0-63f9-477a-8377-21fb869500fb/volumes"
Oct 14 09:12:51 crc kubenswrapper[5058]: I1014 09:12:51.354882 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a6c29d-5dfc-4217-86d4-e4ea9618176b","Type":"ContainerStarted","Data":"eb7268bb847d190b1945df63678bf38e8c8fe7f47c15ab10431a61f94c20656e"}
Oct 14 09:12:51 crc kubenswrapper[5058]: I1014 09:12:51.792275 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:12:51 crc kubenswrapper[5058]: E1014 09:12:51.792662 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:12:52 crc kubenswrapper[5058]: I1014 09:12:52.368608 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a6c29d-5dfc-4217-86d4-e4ea9618176b","Type":"ContainerStarted","Data":"4e1b4a4cb54f684659a13c9eb8b69fc258e31e06e2bc63268b45f1f9a3a28fe7"}
Oct 14 09:12:53 crc kubenswrapper[5058]: I1014 09:12:53.388542 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a6c29d-5dfc-4217-86d4-e4ea9618176b","Type":"ContainerStarted","Data":"fe6ac67e31da5784d1e181105652769d8311b610f6429fc1ee877800be9b03f9"}
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.426378 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a6c29d-5dfc-4217-86d4-e4ea9618176b","Type":"ContainerStarted","Data":"6913ffa80151ece9157493dbd7414c52be61a0c1ddc061145efe3bd3bbe0e741"}
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.426660 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.454129 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ngqc4"
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.458584 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112108806 podStartE2EDuration="5.458563166s" podCreationTimestamp="2025-10-14 09:12:49 +0000 UTC" firstStartedPulling="2025-10-14 09:12:50.159305884 +0000 UTC m=+8718.070389690" lastFinishedPulling="2025-10-14 09:12:53.505760244 +0000 UTC m=+8721.416844050" observedRunningTime="2025-10-14 09:12:54.454663814 +0000 UTC m=+8722.365747630" watchObservedRunningTime="2025-10-14 09:12:54.458563166 +0000 UTC m=+8722.369646972"
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.530898 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ngqc4"
Oct 14 09:12:54 crc kubenswrapper[5058]: I1014 09:12:54.717289 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"]
Oct 14 09:12:56 crc kubenswrapper[5058]: I1014 09:12:56.447048 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ngqc4" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="registry-server" containerID="cri-o://23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922" gracePeriod=2
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.020089 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngqc4"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.212119 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4q7r\" (UniqueName: \"kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r\") pod \"059d5692-f818-42c2-afac-4f0972a36ebf\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") "
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.212666 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities\") pod \"059d5692-f818-42c2-afac-4f0972a36ebf\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") "
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.212790 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content\") pod \"059d5692-f818-42c2-afac-4f0972a36ebf\" (UID: \"059d5692-f818-42c2-afac-4f0972a36ebf\") "
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.213252 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities" (OuterVolumeSpecName: "utilities") pod "059d5692-f818-42c2-afac-4f0972a36ebf" (UID: "059d5692-f818-42c2-afac-4f0972a36ebf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.223944 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r" (OuterVolumeSpecName: "kube-api-access-n4q7r") pod "059d5692-f818-42c2-afac-4f0972a36ebf" (UID: "059d5692-f818-42c2-afac-4f0972a36ebf"). InnerVolumeSpecName "kube-api-access-n4q7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.304501 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "059d5692-f818-42c2-afac-4f0972a36ebf" (UID: "059d5692-f818-42c2-afac-4f0972a36ebf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.314849 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4q7r\" (UniqueName: \"kubernetes.io/projected/059d5692-f818-42c2-afac-4f0972a36ebf-kube-api-access-n4q7r\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.314885 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.314896 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059d5692-f818-42c2-afac-4f0972a36ebf-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.464989 5058 generic.go:334] "Generic (PLEG): container finished" podID="059d5692-f818-42c2-afac-4f0972a36ebf" containerID="23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922" exitCode=0
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.465051 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerDied","Data":"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"}
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.465104 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngqc4" event={"ID":"059d5692-f818-42c2-afac-4f0972a36ebf","Type":"ContainerDied","Data":"5163d1be080a824dc053a5ef73ebe57ee2e357559dc6c021b3aab01fe5665c9d"}
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.465134 5058 scope.go:117] "RemoveContainer" containerID="23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.466160 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngqc4"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.505986 5058 scope.go:117] "RemoveContainer" containerID="494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.512777 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"]
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.558386 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ngqc4"]
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.568789 5058 scope.go:117] "RemoveContainer" containerID="8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.597046 5058 scope.go:117] "RemoveContainer" containerID="23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"
Oct 14 09:12:57 crc kubenswrapper[5058]: E1014 09:12:57.600180 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922\": container with ID starting with 23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922 not found: ID does not exist" containerID="23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.600236 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922"} err="failed to get container status \"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922\": rpc error: code = NotFound desc = could not find container \"23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922\": container with ID starting with 23819d157c8b6ad944d0e19d0b9ca3028cf7c64e27bce34a18c480758a833922 not found: ID does not exist"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.600268 5058 scope.go:117] "RemoveContainer" containerID="494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"
Oct 14 09:12:57 crc kubenswrapper[5058]: E1014 09:12:57.600592 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880\": container with ID starting with 494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880 not found: ID does not exist" containerID="494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.600635 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880"} err="failed to get container status \"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880\": rpc error: code = NotFound desc = could not find container \"494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880\": container with ID starting with 494c8722aab97f9649941ee376bd3bf40a2a149ced6a3ea0aa130d2f6c49d880 not found: ID does not exist"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.600661 5058 scope.go:117] "RemoveContainer" containerID="8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986"
Oct 14 09:12:57 crc kubenswrapper[5058]: E1014 09:12:57.600943 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986\": container with ID starting with 8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986 not found: ID does not exist" containerID="8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986"
Oct 14 09:12:57 crc kubenswrapper[5058]: I1014 09:12:57.600991 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986"} err="failed to get container status \"8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986\": rpc error: code = NotFound desc = could not find container \"8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986\": container with ID starting with 8af460b0f4343bc5a70c55cce3c02bd21a63ad38410a34d2dd8d980abf2d8986 not found: ID does not exist"
Oct 14 09:12:58 crc kubenswrapper[5058]: I1014 09:12:58.801080 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" path="/var/lib/kubelet/pods/059d5692-f818-42c2-afac-4f0972a36ebf/volumes"
Oct 14 09:13:03 crc kubenswrapper[5058]: I1014 09:13:03.790030 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:13:03 crc kubenswrapper[5058]: E1014 09:13:03.790872 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:13:16 crc kubenswrapper[5058]: I1014 09:13:16.789672 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:13:16 crc kubenswrapper[5058]: E1014 09:13:16.790761 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:13:19 crc kubenswrapper[5058]: I1014 09:13:19.705745 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.593774 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"]
Oct 14 09:13:26 crc kubenswrapper[5058]: E1014 09:13:26.594859 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="extract-utilities"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.594876 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="extract-utilities"
Oct 14 09:13:26 crc kubenswrapper[5058]: E1014 09:13:26.594906 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="extract-content"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.594915 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="extract-content"
Oct 14 09:13:26 crc kubenswrapper[5058]: E1014 09:13:26.594928 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="registry-server"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.594937 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="registry-server"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.595200 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="059d5692-f818-42c2-afac-4f0972a36ebf" containerName="registry-server"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.596606 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.602510 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.613115 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"]
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.703848 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.704248 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.704502 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.704546 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4fw\" (UniqueName: \"kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.704576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.704693 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817029 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4fw\" (UniqueName: \"kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817127 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817233 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817269 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817329 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.817429 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.820694 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.820705 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.823266 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.823641 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.823990 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.838478 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4fw\" (UniqueName: \"kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw\") pod \"dnsmasq-dns-bbb9dd4f-fptk7\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:26 crc kubenswrapper[5058]: I1014 09:13:26.980599 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:27 crc kubenswrapper[5058]: I1014 09:13:27.565369 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"]
Oct 14 09:13:27 crc kubenswrapper[5058]: I1014 09:13:27.866529 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" event={"ID":"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf","Type":"ContainerStarted","Data":"2c7e1431f09bbc9de1169cc954a927669e69f1771a3b4abc68cd5690a6492d49"}
Oct 14 09:13:28 crc kubenswrapper[5058]: I1014 09:13:28.881468 5058 generic.go:334] "Generic (PLEG): container finished" podID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerID="8368099f4b3e722b1138cae5b456baa1580e75cccdabb76dc4d144864b1894b5" exitCode=0
Oct 14 09:13:28 crc kubenswrapper[5058]: I1014 09:13:28.881560 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" event={"ID":"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf","Type":"ContainerDied","Data":"8368099f4b3e722b1138cae5b456baa1580e75cccdabb76dc4d144864b1894b5"}
Oct 14 09:13:29 crc kubenswrapper[5058]: I1014 09:13:29.897996 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" event={"ID":"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf","Type":"ContainerStarted","Data":"45677642ac2bcbe25476c78502be8d9f71418aced6113b2b8143412788159b74"}
Oct 14 09:13:29 crc kubenswrapper[5058]: I1014 09:13:29.898365 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:29 crc kubenswrapper[5058]: I1014 09:13:29.930958 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" podStartSLOduration=3.930929824 podStartE2EDuration="3.930929824s" podCreationTimestamp="2025-10-14 09:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:13:29.92449175 +0000 UTC m=+8757.835575646" watchObservedRunningTime="2025-10-14 09:13:29.930929824 +0000 UTC m=+8757.842013670"
Oct 14 09:13:31 crc kubenswrapper[5058]: I1014 09:13:31.791210 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:13:31 crc kubenswrapper[5058]: E1014 09:13:31.792158 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.381868 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"]
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.385766 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.405035 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"]
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.526392 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.526484 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.526567 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vdn\" (UniqueName: \"kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.627949 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.628084 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.628201 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vdn\" (UniqueName: \"kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.628464 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.628619 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.648911 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vdn\" (UniqueName: \"kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn\") pod \"redhat-marketplace-vpx8z\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:34 crc kubenswrapper[5058]: I1014 09:13:34.712189 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpx8z"
Oct 14 09:13:35 crc kubenswrapper[5058]: I1014 09:13:35.215129 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"]
Oct 14 09:13:35 crc kubenswrapper[5058]: I1014 09:13:35.980747 5058 generic.go:334] "Generic (PLEG): container finished" podID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerID="cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103" exitCode=0
Oct 14 09:13:35 crc kubenswrapper[5058]: I1014 09:13:35.981549 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerDied","Data":"cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103"}
Oct 14 09:13:35 crc kubenswrapper[5058]: I1014 09:13:35.981634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerStarted","Data":"c345b252599013b3261e21d9294bece449e2afe321d9ef8136823923a1a826b0"}
Oct 14 09:13:35 crc kubenswrapper[5058]: I1014 09:13:35.984872 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 09:13:36 crc kubenswrapper[5058]: I1014 09:13:36.983037 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.051619 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"]
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.052363 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="dnsmasq-dns" containerID="cri-o://8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c" gracePeriod=10
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.200699 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"]
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.203104 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.219409 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"]
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.296935 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.297113 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.297273 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.297373 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.297442 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf245\" (UniqueName: \"kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.297562 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.399576 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.400094 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.400165 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.400212 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.400248 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf245\" (UniqueName: \"kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.400305 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.401169 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.401677 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.402230 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.424743 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.424944 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.448328 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf245\" (UniqueName: \"kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245\") pod \"dnsmasq-dns-6cfc7bffb5-j9chd\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.548900 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.640050 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c"
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.806967 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc\") pod \"2ee4bc78-886a-4387-b827-5a1822a278cc\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") "
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.807311 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk9wn\" (UniqueName: \"kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn\") pod \"2ee4bc78-886a-4387-b827-5a1822a278cc\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") "
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.807406 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb\") pod \"2ee4bc78-886a-4387-b827-5a1822a278cc\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") "
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.807462 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb\") pod \"2ee4bc78-886a-4387-b827-5a1822a278cc\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") "
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.807489 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config\") pod \"2ee4bc78-886a-4387-b827-5a1822a278cc\" (UID: \"2ee4bc78-886a-4387-b827-5a1822a278cc\") "
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.816568 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn" (OuterVolumeSpecName: "kube-api-access-bk9wn") pod "2ee4bc78-886a-4387-b827-5a1822a278cc" (UID: "2ee4bc78-886a-4387-b827-5a1822a278cc"). InnerVolumeSpecName "kube-api-access-bk9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.865439 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ee4bc78-886a-4387-b827-5a1822a278cc" (UID: "2ee4bc78-886a-4387-b827-5a1822a278cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.867203 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ee4bc78-886a-4387-b827-5a1822a278cc" (UID: "2ee4bc78-886a-4387-b827-5a1822a278cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.867904 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config" (OuterVolumeSpecName: "config") pod "2ee4bc78-886a-4387-b827-5a1822a278cc" (UID: "2ee4bc78-886a-4387-b827-5a1822a278cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.868369 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ee4bc78-886a-4387-b827-5a1822a278cc" (UID: "2ee4bc78-886a-4387-b827-5a1822a278cc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.912419 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.912464 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-config\") on node \"crc\" DevicePath \"\""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.912477 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.912492 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk9wn\" (UniqueName: \"kubernetes.io/projected/2ee4bc78-886a-4387-b827-5a1822a278cc-kube-api-access-bk9wn\") on node \"crc\" DevicePath \"\""
Oct 14 09:13:37 crc kubenswrapper[5058]: I1014 09:13:37.912506 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ee4bc78-886a-4387-b827-5a1822a278cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.007107 5058 generic.go:334] "Generic (PLEG): container finished" podID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerID="01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82" exitCode=0
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.007289 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerDied","Data":"01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82"}
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.017648 5058 generic.go:334] "Generic (PLEG): container finished" podID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerID="8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c" exitCode=0
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.017699 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" event={"ID":"2ee4bc78-886a-4387-b827-5a1822a278cc","Type":"ContainerDied","Data":"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"}
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.017717 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.017737 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d859d846c-b7t7c" event={"ID":"2ee4bc78-886a-4387-b827-5a1822a278cc","Type":"ContainerDied","Data":"93c45a3c4c85c0fae21499a915d42a313b7443f39f81d2349297c567fcf72f7c"}
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.017759 5058 scope.go:117] "RemoveContainer" containerID="8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.059253 5058 scope.go:117] "RemoveContainer" containerID="29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.109294 5058 scope.go:117] "RemoveContainer" containerID="8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"
Oct 14 09:13:38 crc kubenswrapper[5058]: E1014 09:13:38.110370 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c\": container with ID starting with 8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c not found: ID does not exist" containerID="8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.110403 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c"} err="failed to get container status \"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c\": rpc error: code = NotFound desc = could not find container \"8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c\": container with ID starting with 8b8a5557a46a898ff578e8aac6579496c081fd7fcbc1a8510abc9592c9661a9c not found: ID does not exist"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.110435 5058 scope.go:117] "RemoveContainer" containerID="29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f"
Oct 14 09:13:38 crc kubenswrapper[5058]: E1014 09:13:38.110671 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f\": container with ID starting with 29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f not found: ID does not exist" containerID="29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f"
Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.110696 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f"} err="failed to get container status \"29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f\": rpc error: code = NotFound desc = could not find container \"29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f\": container with ID starting with
29ceeebff7a5503a4496bdbbfb36f307b0743d3e8ed228b792936aac1a149a8f not found: ID does not exist" Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.134840 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"] Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.170902 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d859d846c-b7t7c"] Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.190139 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"] Oct 14 09:13:38 crc kubenswrapper[5058]: I1014 09:13:38.803724 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" path="/var/lib/kubelet/pods/2ee4bc78-886a-4387-b827-5a1822a278cc/volumes" Oct 14 09:13:39 crc kubenswrapper[5058]: I1014 09:13:39.030945 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerStarted","Data":"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594"} Oct 14 09:13:39 crc kubenswrapper[5058]: I1014 09:13:39.032210 5058 generic.go:334] "Generic (PLEG): container finished" podID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerID="0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607" exitCode=0 Oct 14 09:13:39 crc kubenswrapper[5058]: I1014 09:13:39.032247 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" event={"ID":"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6","Type":"ContainerDied","Data":"0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607"} Oct 14 09:13:39 crc kubenswrapper[5058]: I1014 09:13:39.032268 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" event={"ID":"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6","Type":"ContainerStarted","Data":"6f6093c4ab47c245eff6102dd77f7c9fb8581216ed9871a33262496a3cd563a3"} Oct 14 09:13:39 crc kubenswrapper[5058]: I1014 09:13:39.053825 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpx8z" podStartSLOduration=2.604527361 podStartE2EDuration="5.053783929s" podCreationTimestamp="2025-10-14 09:13:34 +0000 UTC" firstStartedPulling="2025-10-14 09:13:35.984457682 +0000 UTC m=+8763.895541498" lastFinishedPulling="2025-10-14 09:13:38.43371426 +0000 UTC m=+8766.344798066" observedRunningTime="2025-10-14 09:13:39.052394369 +0000 UTC m=+8766.963478175" watchObservedRunningTime="2025-10-14 09:13:39.053783929 +0000 UTC m=+8766.964867735" Oct 14 09:13:40 crc kubenswrapper[5058]: I1014 09:13:40.052029 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" event={"ID":"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6","Type":"ContainerStarted","Data":"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2"} Oct 14 09:13:40 crc kubenswrapper[5058]: I1014 09:13:40.084071 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" podStartSLOduration=3.084045856 podStartE2EDuration="3.084045856s" podCreationTimestamp="2025-10-14 09:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:13:40.077342964 +0000 UTC m=+8767.988426830" watchObservedRunningTime="2025-10-14 09:13:40.084045856 +0000 UTC 
m=+8767.995129692" Oct 14 09:13:41 crc kubenswrapper[5058]: I1014 09:13:41.069485 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.712572 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.713113 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.760769 5058 scope.go:117] "RemoveContainer" containerID="7df15d0b93cff7d5f6f8bbd495e0aed17ad71fdd48d38837792442db1bdd9850" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.810999 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.813562 5058 scope.go:117] "RemoveContainer" containerID="fb05a9923ad1ca52797b27c80209f83ea1eaf6f1564ee5f11aad423148988171" Oct 14 09:13:44 crc kubenswrapper[5058]: I1014 09:13:44.859183 5058 scope.go:117] "RemoveContainer" containerID="259a8210a3ea80b0d226e3b994fafb64254fe2072b053941265ece8c91cd46a3" Oct 14 09:13:45 crc kubenswrapper[5058]: I1014 09:13:45.188250 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:45 crc kubenswrapper[5058]: I1014 09:13:45.242383 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"] Oct 14 09:13:46 crc kubenswrapper[5058]: I1014 09:13:46.790670 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:13:46 crc kubenswrapper[5058]: E1014 09:13:46.791366 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.047168 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-48svt"] Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.059768 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-48svt"] Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.144921 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpx8z" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="registry-server" containerID="cri-o://33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594" gracePeriod=2 Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.551007 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.630448 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"] Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.630717 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" 
podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="dnsmasq-dns" containerID="cri-o://45677642ac2bcbe25476c78502be8d9f71418aced6113b2b8143412788159b74" gracePeriod=10 Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.810904 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77df87b869-k4fps"] Oct 14 09:13:47 crc kubenswrapper[5058]: E1014 09:13:47.811765 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="dnsmasq-dns" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.811782 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="dnsmasq-dns" Oct 14 09:13:47 crc kubenswrapper[5058]: E1014 09:13:47.811800 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="init" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.811822 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="init" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.812078 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee4bc78-886a-4387-b827-5a1822a278cc" containerName="dnsmasq-dns" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.813784 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.825840 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77df87b869-k4fps"] Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.871666 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell2" Oct 14 09:13:47 crc kubenswrapper[5058]: I1014 09:13:47.874932 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.076426 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content\") pod \"fac57268-12f0-4aa6-97ca-5b980717ccee\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.076510 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vdn\" (UniqueName: \"kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn\") pod \"fac57268-12f0-4aa6-97ca-5b980717ccee\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.076593 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities\") pod \"fac57268-12f0-4aa6-97ca-5b980717ccee\" (UID: \"fac57268-12f0-4aa6-97ca-5b980717ccee\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.076896 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell2\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell2\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.076964 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-dns-svc\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.077043 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell1\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.077089 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-sb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.077143 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-config\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.077179 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-nb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " 
pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.077228 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfr8g\" (UniqueName: \"kubernetes.io/projected/fb5192ff-af74-4af8-badc-d59affd44082-kube-api-access-cfr8g\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.078231 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities" (OuterVolumeSpecName: "utilities") pod "fac57268-12f0-4aa6-97ca-5b980717ccee" (UID: "fac57268-12f0-4aa6-97ca-5b980717ccee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.087209 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn" (OuterVolumeSpecName: "kube-api-access-h8vdn") pod "fac57268-12f0-4aa6-97ca-5b980717ccee" (UID: "fac57268-12f0-4aa6-97ca-5b980717ccee"). InnerVolumeSpecName "kube-api-access-h8vdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.092827 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fac57268-12f0-4aa6-97ca-5b980717ccee" (UID: "fac57268-12f0-4aa6-97ca-5b980717ccee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.156385 5058 generic.go:334] "Generic (PLEG): container finished" podID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerID="45677642ac2bcbe25476c78502be8d9f71418aced6113b2b8143412788159b74" exitCode=0 Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.156458 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" event={"ID":"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf","Type":"ContainerDied","Data":"45677642ac2bcbe25476c78502be8d9f71418aced6113b2b8143412788159b74"} Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.156484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" event={"ID":"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf","Type":"ContainerDied","Data":"2c7e1431f09bbc9de1169cc954a927669e69f1771a3b4abc68cd5690a6492d49"} Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.156496 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c7e1431f09bbc9de1169cc954a927669e69f1771a3b4abc68cd5690a6492d49" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.158494 5058 generic.go:334] "Generic (PLEG): container finished" podID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerID="33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594" exitCode=0 Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.158537 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerDied","Data":"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594"} Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.158574 5058 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpx8z" event={"ID":"fac57268-12f0-4aa6-97ca-5b980717ccee","Type":"ContainerDied","Data":"c345b252599013b3261e21d9294bece449e2afe321d9ef8136823923a1a826b0"} Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.158599 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpx8z" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.158606 5058 scope.go:117] "RemoveContainer" containerID="33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.178707 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell1\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.178768 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-sb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179002 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-config\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179067 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-nb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179688 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-sb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179724 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfr8g\" (UniqueName: \"kubernetes.io/projected/fb5192ff-af74-4af8-badc-d59affd44082-kube-api-access-cfr8g\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179758 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell2\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell2\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179817 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-dns-svc\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179840 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-ovsdbserver-nb\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179899 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.181040 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vdn\" (UniqueName: \"kubernetes.io/projected/fac57268-12f0-4aa6-97ca-5b980717ccee-kube-api-access-h8vdn\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.181053 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac57268-12f0-4aa6-97ca-5b980717ccee-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.180770 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell2\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell2\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.179913 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-config\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.180587 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-dns-svc\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.181524 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fb5192ff-af74-4af8-badc-d59affd44082-openstack-cell1\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.204834 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfr8g\" (UniqueName: \"kubernetes.io/projected/fb5192ff-af74-4af8-badc-d59affd44082-kube-api-access-cfr8g\") pod \"dnsmasq-dns-77df87b869-k4fps\" (UID: \"fb5192ff-af74-4af8-badc-d59affd44082\") " pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.207419 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.218168 5058 scope.go:117] "RemoveContainer" containerID="01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.231558 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"] Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.259023 5058 scope.go:117] "RemoveContainer" containerID="cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.268464 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpx8z"] Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282094 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282187 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282221 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz4fw\" (UniqueName: \"kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282277 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282312 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.282336 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb\") pod \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\" (UID: \"4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf\") " Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.288458 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw" (OuterVolumeSpecName: "kube-api-access-vz4fw") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "kube-api-access-vz4fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.334226 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.334447 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.347309 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.348455 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.354272 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config" (OuterVolumeSpecName: "config") pod "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" (UID: "4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.385442 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.385753 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.385862 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz4fw\" (UniqueName: \"kubernetes.io/projected/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-kube-api-access-vz4fw\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.385965 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.386038 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.386106 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.435861 5058 scope.go:117] "RemoveContainer" containerID="33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594" Oct 14 09:13:48 crc kubenswrapper[5058]: E1014 09:13:48.436411 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594\": container with ID starting with 33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594 not found: ID does not exist" containerID="33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.436461 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594"} err="failed to get container status \"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594\": rpc error: code = NotFound desc = could not find container \"33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594\": container with ID starting with 33eab7a339451c463c23903609aee71a1f1bd997c64d137698319c8860a2a594 not found: ID does not exist" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.436493 5058 scope.go:117] "RemoveContainer" containerID="01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82" Oct 14 09:13:48 crc kubenswrapper[5058]: E1014 09:13:48.437142 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82\": container with ID starting with 01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82 not found: ID does not exist" containerID="01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82" Oct 14 09:13:48 crc 
kubenswrapper[5058]: I1014 09:13:48.437186 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82"} err="failed to get container status \"01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82\": rpc error: code = NotFound desc = could not find container \"01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82\": container with ID starting with 01ac99bc462bc7e3f20d8013e0d8240930a08c6c782d2eacfdac52af50830d82 not found: ID does not exist" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.437216 5058 scope.go:117] "RemoveContainer" containerID="cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103" Oct 14 09:13:48 crc kubenswrapper[5058]: E1014 09:13:48.437737 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103\": container with ID starting with cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103 not found: ID does not exist" containerID="cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.437769 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103"} err="failed to get container status \"cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103\": rpc error: code = NotFound desc = could not find container \"cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103\": container with ID starting with cf1259cd0e9abcb356d80ddd28b5e2720cd51326e8d7858d9c78b01589e0e103 not found: ID does not exist" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.484917 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.806175 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba118ad4-0e58-4e21-957a-70e154050f57" path="/var/lib/kubelet/pods/ba118ad4-0e58-4e21-957a-70e154050f57/volumes" Oct 14 09:13:48 crc kubenswrapper[5058]: I1014 09:13:48.807410 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" path="/var/lib/kubelet/pods/fac57268-12f0-4aa6-97ca-5b980717ccee/volumes" Oct 14 09:13:49 crc kubenswrapper[5058]: I1014 09:13:49.066671 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77df87b869-k4fps"] Oct 14 09:13:49 crc kubenswrapper[5058]: I1014 09:13:49.271019 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbb9dd4f-fptk7" Oct 14 09:13:49 crc kubenswrapper[5058]: I1014 09:13:49.272023 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77df87b869-k4fps" event={"ID":"fb5192ff-af74-4af8-badc-d59affd44082","Type":"ContainerStarted","Data":"ddd208b38775583de07c260497df4f5a35ad461fd0e0b9355c58141cd4c10da5"} Oct 14 09:13:49 crc kubenswrapper[5058]: I1014 09:13:49.391358 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"] Oct 14 09:13:49 crc kubenswrapper[5058]: I1014 09:13:49.422521 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbb9dd4f-fptk7"] Oct 14 09:13:50 crc kubenswrapper[5058]: I1014 09:13:50.291302 5058 generic.go:334] "Generic (PLEG): container finished" podID="fb5192ff-af74-4af8-badc-d59affd44082" containerID="1967e78670d63773edf7963730f0e8fea5b5e5e7c2ef5f99e82826ed6deb880a" exitCode=0 Oct 14 09:13:50 crc kubenswrapper[5058]: I1014 09:13:50.291367 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77df87b869-k4fps" event={"ID":"fb5192ff-af74-4af8-badc-d59affd44082","Type":"ContainerDied","Data":"1967e78670d63773edf7963730f0e8fea5b5e5e7c2ef5f99e82826ed6deb880a"} Oct 14 09:13:50 crc kubenswrapper[5058]: I1014 09:13:50.802500 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" path="/var/lib/kubelet/pods/4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf/volumes" Oct 14 09:13:51 crc kubenswrapper[5058]: I1014 09:13:51.310156 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77df87b869-k4fps" event={"ID":"fb5192ff-af74-4af8-badc-d59affd44082","Type":"ContainerStarted","Data":"1c8c029f365fcd0dbbaebd5e1c664de8e1c854a359d790725c1679281f6edde3"} Oct 14 09:13:51 crc kubenswrapper[5058]: I1014 09:13:51.312084 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:51 crc kubenswrapper[5058]: I1014 09:13:51.350768 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77df87b869-k4fps" podStartSLOduration=4.350742147 podStartE2EDuration="4.350742147s" podCreationTimestamp="2025-10-14 09:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:13:51.335570754 +0000 UTC m=+8779.246654600" watchObservedRunningTime="2025-10-14 09:13:51.350742147 +0000 UTC m=+8779.261825983" Oct 14 09:13:57 crc kubenswrapper[5058]: I1014 09:13:57.057355 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-784e-account-create-wvxwt"] Oct 14 09:13:57 crc kubenswrapper[5058]: I1014 09:13:57.069254 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-784e-account-create-wvxwt"] Oct 14 09:13:58 crc kubenswrapper[5058]: I1014 09:13:58.488056 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77df87b869-k4fps" Oct 14 09:13:58 crc kubenswrapper[5058]: I1014 09:13:58.566706 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"] Oct 14 09:13:58 crc kubenswrapper[5058]: I1014 09:13:58.567058 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="dnsmasq-dns" 
containerID="cri-o://c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2" gracePeriod=10 Oct 14 09:13:58 crc kubenswrapper[5058]: I1014 09:13:58.790584 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:13:58 crc kubenswrapper[5058]: E1014 09:13:58.790784 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:13:58 crc kubenswrapper[5058]: I1014 09:13:58.801770 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfff069-4a26-4862-931b-a63aafa9d8ea" path="/var/lib/kubelet/pods/5dfff069-4a26-4862-931b-a63aafa9d8ea/volumes" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.215327 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.342662 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.342774 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.342836 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.342979 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.343071 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.343155 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf245\" (UniqueName: \"kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245\") pod \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\" (UID: \"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6\") " Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.350753 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245" 
(OuterVolumeSpecName: "kube-api-access-nf245") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "kube-api-access-nf245". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.419736 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.419950 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.420169 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config" (OuterVolumeSpecName: "config") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.420969 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.426783 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" (UID: "0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445372 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445410 5058 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445423 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf245\" (UniqueName: \"kubernetes.io/projected/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-kube-api-access-nf245\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445440 5058 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-config\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445450 5058 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.445460 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.459108 5058 generic.go:334] "Generic (PLEG): container finished" podID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerID="c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2" exitCode=0 Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.459161 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" event={"ID":"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6","Type":"ContainerDied","Data":"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2"} Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.459195 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" event={"ID":"0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6","Type":"ContainerDied","Data":"6f6093c4ab47c245eff6102dd77f7c9fb8581216ed9871a33262496a3cd563a3"} Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.459217 5058 scope.go:117] "RemoveContainer" containerID="c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.459393 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cfc7bffb5-j9chd" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.464811 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4"] Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.465318 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="init" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.465340 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="init" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.465361 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="extract-utilities" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.465369 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="extract-utilities" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.465382 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="registry-server" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.465390 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="registry-server" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.468000 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="init" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468031 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="init" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.468065 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468074 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.468150 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="extract-content" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468185 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="extract-content" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.468205 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468213 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468697 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468728 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd3722d-6b9a-49e6-9a15-cf77fb6f58cf" containerName="dnsmasq-dns" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.468751 5058 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fac57268-12f0-4aa6-97ca-5b980717ccee" containerName="registry-server" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.469748 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.471789 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.475344 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.475699 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.475836 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.487407 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56"] Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.489744 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.491952 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.507107 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4"] Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.514074 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.515105 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56"] Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.536975 5058 scope.go:117] "RemoveContainer" containerID="0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.547294 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.547479 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.547537 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.547598 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxs8r\" (UniqueName: \"kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.562224 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"] Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.567707 5058 scope.go:117] "RemoveContainer" containerID="c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.568200 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2\": container with ID starting with c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2 not found: ID does not exist" containerID="c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.568353 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2"} err="failed to get container status \"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2\": rpc error: code = NotFound desc = could not find container \"c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2\": container with ID starting with c953c5af1834a46cf56ec849c221c07d088964ebedcb7cf8dee7ae69513accf2 not found: ID does not exist" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.568451 5058 scope.go:117] "RemoveContainer" containerID="0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607" Oct 14 09:13:59 crc kubenswrapper[5058]: E1014 09:13:59.568752 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607\": container with ID starting with 0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607 not found: ID does not exist" containerID="0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.568856 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607"} err="failed to get container status \"0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607\": rpc error: code = NotFound desc = could not find container \"0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607\": container with ID starting with 0a8534119739355c94c66953dde495fbedbfaff2bc5256e2e7f6417aead27607 not found: ID does not exist" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.569902 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-6cfc7bffb5-j9chd"] Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.649807 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650007 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650137 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650221 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjwh4\" (UniqueName: \"kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650431 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650517 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.650608 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jxs8r\" (UniqueName: \"kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.655166 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.655565 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.655896 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.667922 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxs8r\" (UniqueName: \"kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.752858 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.753080 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.753229 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjwh4\" (UniqueName: \"kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.753430 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.756724 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.757165 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.757895 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.772081 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjwh4\" (UniqueName: \"kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.847536 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:13:59 crc kubenswrapper[5058]: I1014 09:13:59.859280 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:14:00 crc kubenswrapper[5058]: I1014 09:14:00.684990 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56"] Oct 14 09:14:00 crc kubenswrapper[5058]: I1014 09:14:00.803151 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6" path="/var/lib/kubelet/pods/0fa6a2ba-31bf-47b4-9e40-40fe5e07afc6/volumes" Oct 14 09:14:01 crc kubenswrapper[5058]: W1014 09:14:01.256585 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9138e077_43e6_4e16_b9c9_83b3c807b456.slice/crio-2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7 WatchSource:0}: Error finding container 2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7: Status 404 returned error can't find the container with id 2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7 Oct 14 09:14:01 crc kubenswrapper[5058]: I1014 09:14:01.260237 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4"] Oct 14 09:14:01 crc kubenswrapper[5058]: I1014 09:14:01.485181 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" event={"ID":"8e3244f3-1a76-4003-8e1a-62b390032bd2","Type":"ContainerStarted","Data":"53ea67eaf2fb6e75a0082022ae29ce1d25c0270d9e52d064d15ae1ae042b2c99"} Oct 14 09:14:01 crc kubenswrapper[5058]: I1014 09:14:01.487271 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" event={"ID":"9138e077-43e6-4e16-b9c9-83b3c807b456","Type":"ContainerStarted","Data":"2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7"} Oct 14 09:14:10 crc kubenswrapper[5058]: I1014 09:14:10.603466 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" event={"ID":"9138e077-43e6-4e16-b9c9-83b3c807b456","Type":"ContainerStarted","Data":"d7398fd4e8dd0638b6340b5eea641706d79f03d6bc26a39b3208c9a8f32374bc"} Oct 14 09:14:10 crc kubenswrapper[5058]: I1014 09:14:10.606437 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" event={"ID":"8e3244f3-1a76-4003-8e1a-62b390032bd2","Type":"ContainerStarted","Data":"8b877d747a2f416cf688e0f1bc1c7a80f6570c9715c601e0a4f317c4ee1e08f9"} Oct 14 09:14:10 crc kubenswrapper[5058]: I1014 09:14:10.626885 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" podStartSLOduration=3.343244659 podStartE2EDuration="11.62686889s" podCreationTimestamp="2025-10-14 09:13:59 +0000 UTC" firstStartedPulling="2025-10-14 09:14:01.259147323 +0000 UTC m=+8789.170231139" lastFinishedPulling="2025-10-14 09:14:09.542771564 +0000 UTC m=+8797.453855370" observedRunningTime="2025-10-14 09:14:10.622726051 +0000 UTC m=+8798.533809897" watchObservedRunningTime="2025-10-14 09:14:10.62686889 +0000 UTC m=+8798.537952696" Oct 14 09:14:10 crc kubenswrapper[5058]: I1014 09:14:10.649298 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" podStartSLOduration=2.818426054 podStartE2EDuration="11.6492777s" podCreationTimestamp="2025-10-14 09:13:59 +0000 UTC" firstStartedPulling="2025-10-14 09:14:00.694900501 +0000 UTC m=+8788.605984307" lastFinishedPulling="2025-10-14 09:14:09.525752107 +0000 UTC m=+8797.436835953" observedRunningTime="2025-10-14 09:14:10.645139962 +0000 UTC m=+8798.556223768" watchObservedRunningTime="2025-10-14 09:14:10.6492777 +0000 UTC m=+8798.560361496" Oct 14 09:14:12 crc kubenswrapper[5058]: I1014 09:14:12.797077 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:14:12 crc kubenswrapper[5058]: E1014 09:14:12.797634 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.042414 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7tvpt"] Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.051223 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7tvpt"] Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.738624 5058 generic.go:334] "Generic (PLEG): container finished" podID="9138e077-43e6-4e16-b9c9-83b3c807b456" containerID="d7398fd4e8dd0638b6340b5eea641706d79f03d6bc26a39b3208c9a8f32374bc" exitCode=0 Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.738716 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" event={"ID":"9138e077-43e6-4e16-b9c9-83b3c807b456","Type":"ContainerDied","Data":"d7398fd4e8dd0638b6340b5eea641706d79f03d6bc26a39b3208c9a8f32374bc"} Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.741705 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e3244f3-1a76-4003-8e1a-62b390032bd2" containerID="8b877d747a2f416cf688e0f1bc1c7a80f6570c9715c601e0a4f317c4ee1e08f9" exitCode=0 Oct 14 09:14:21 crc kubenswrapper[5058]: I1014 09:14:21.741757 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" event={"ID":"8e3244f3-1a76-4003-8e1a-62b390032bd2","Type":"ContainerDied","Data":"8b877d747a2f416cf688e0f1bc1c7a80f6570c9715c601e0a4f317c4ee1e08f9"} Oct 14 09:14:22 crc kubenswrapper[5058]: I1014 09:14:22.809221 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3692ec-bee1-4096-bb7d-b50388aab98b" path="/var/lib/kubelet/pods/af3692ec-bee1-4096-bb7d-b50388aab98b/volumes" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.215108 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.400236 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key\") pod \"9138e077-43e6-4e16-b9c9-83b3c807b456\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.400668 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxs8r\" (UniqueName: \"kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r\") pod \"9138e077-43e6-4e16-b9c9-83b3c807b456\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.400706 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle\") pod \"9138e077-43e6-4e16-b9c9-83b3c807b456\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.400728 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory\") pod \"9138e077-43e6-4e16-b9c9-83b3c807b456\" (UID: \"9138e077-43e6-4e16-b9c9-83b3c807b456\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.419154 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "9138e077-43e6-4e16-b9c9-83b3c807b456" (UID: "9138e077-43e6-4e16-b9c9-83b3c807b456"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.422234 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r" (OuterVolumeSpecName: "kube-api-access-jxs8r") pod "9138e077-43e6-4e16-b9c9-83b3c807b456" (UID: "9138e077-43e6-4e16-b9c9-83b3c807b456"). InnerVolumeSpecName "kube-api-access-jxs8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.443772 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory" (OuterVolumeSpecName: "inventory") pod "9138e077-43e6-4e16-b9c9-83b3c807b456" (UID: "9138e077-43e6-4e16-b9c9-83b3c807b456"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.457142 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9138e077-43e6-4e16-b9c9-83b3c807b456" (UID: "9138e077-43e6-4e16-b9c9-83b3c807b456"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.503536 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.503575 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxs8r\" (UniqueName: \"kubernetes.io/projected/9138e077-43e6-4e16-b9c9-83b3c807b456-kube-api-access-jxs8r\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.503588 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.503602 5058 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9138e077-43e6-4e16-b9c9-83b3c807b456-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.734643 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.764161 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" event={"ID":"9138e077-43e6-4e16-b9c9-83b3c807b456","Type":"ContainerDied","Data":"2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7"} Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.764205 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1573b65b52249b9b40f2633e2aaf3b77b379d606f2c2256d5fcaeac36db4a7" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.764268 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.770302 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" event={"ID":"8e3244f3-1a76-4003-8e1a-62b390032bd2","Type":"ContainerDied","Data":"53ea67eaf2fb6e75a0082022ae29ce1d25c0270d9e52d064d15ae1ae042b2c99"} Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.770342 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ea67eaf2fb6e75a0082022ae29ce1d25c0270d9e52d064d15ae1ae042b2c99" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.770401 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.911457 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjwh4\" (UniqueName: \"kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4\") pod \"8e3244f3-1a76-4003-8e1a-62b390032bd2\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.911504 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory\") pod \"8e3244f3-1a76-4003-8e1a-62b390032bd2\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.911532 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle\") pod \"8e3244f3-1a76-4003-8e1a-62b390032bd2\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.911683 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key\") pod \"8e3244f3-1a76-4003-8e1a-62b390032bd2\" (UID: \"8e3244f3-1a76-4003-8e1a-62b390032bd2\") " Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.918543 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "8e3244f3-1a76-4003-8e1a-62b390032bd2" (UID: "8e3244f3-1a76-4003-8e1a-62b390032bd2"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.918622 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4" (OuterVolumeSpecName: "kube-api-access-hjwh4") pod "8e3244f3-1a76-4003-8e1a-62b390032bd2" (UID: "8e3244f3-1a76-4003-8e1a-62b390032bd2"). InnerVolumeSpecName "kube-api-access-hjwh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.941878 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory" (OuterVolumeSpecName: "inventory") pod "8e3244f3-1a76-4003-8e1a-62b390032bd2" (UID: "8e3244f3-1a76-4003-8e1a-62b390032bd2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:23 crc kubenswrapper[5058]: I1014 09:14:23.944367 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8e3244f3-1a76-4003-8e1a-62b390032bd2" (UID: "8e3244f3-1a76-4003-8e1a-62b390032bd2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:14:24 crc kubenswrapper[5058]: I1014 09:14:24.013827 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjwh4\" (UniqueName: \"kubernetes.io/projected/8e3244f3-1a76-4003-8e1a-62b390032bd2-kube-api-access-hjwh4\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:24 crc kubenswrapper[5058]: I1014 09:14:24.013855 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:24 crc kubenswrapper[5058]: I1014 09:14:24.013865 5058 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:24 crc kubenswrapper[5058]: I1014 09:14:24.013875 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e3244f3-1a76-4003-8e1a-62b390032bd2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:14:26 crc kubenswrapper[5058]: I1014 09:14:26.792994 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:14:26 crc kubenswrapper[5058]: E1014 09:14:26.793728 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.899622 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4"] Oct 14 09:14:32 crc kubenswrapper[5058]: E1014 09:14:32.900651 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9138e077-43e6-4e16-b9c9-83b3c807b456" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell2" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.900668 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9138e077-43e6-4e16-b9c9-83b3c807b456" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell2" Oct 14 09:14:32 crc kubenswrapper[5058]: E1014 09:14:32.900724 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3244f3-1a76-4003-8e1a-62b390032bd2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.900733 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3244f3-1a76-4003-8e1a-62b390032bd2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.901036 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3244f3-1a76-4003-8e1a-62b390032bd2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.901057 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9138e077-43e6-4e16-b9c9-83b3c807b456" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell2" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 
09:14:32.901849 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.908082 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.908158 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.908451 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.908465 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.912613 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2"] Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.917577 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.920508 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.920769 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.922284 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4"] Oct 14 09:14:32 crc kubenswrapper[5058]: I1014 09:14:32.933985 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2"] Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017131 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017192 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017222 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wm5f\" (UniqueName: \"kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017404 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017523 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017574 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017840 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.017880 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2bf\" (UniqueName: \"kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119464 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119511 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wm5f\" (UniqueName: \"kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119564 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119604 5058 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119631 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119700 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119722 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf2bf\" (UniqueName: \"kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.119769 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.126262 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.127480 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.128725 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.128869 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.132105 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.142483 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.158393 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf2bf\" (UniqueName: \"kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.161439 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wm5f\" (UniqueName: \"kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.259476 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.269709 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.880266 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4"] Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.901084 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" event={"ID":"f0c0452c-9556-4225-89fd-d5a07ad5fbfa","Type":"ContainerStarted","Data":"b2106a9cbe838791367fae181cf88e8cac9fcbf041bbbecee46fb75efa860b1f"} Oct 14 09:14:33 crc kubenswrapper[5058]: I1014 09:14:33.961766 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2"] Oct 14 09:14:34 crc kubenswrapper[5058]: W1014 09:14:34.599973 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e159776_52a9_419f_936f_4a921173dd04.slice/crio-082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2 WatchSource:0}: Error finding container 082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2: Status 404 returned error can't find the container with id 082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2 Oct 14 09:14:34 crc kubenswrapper[5058]: I1014 09:14:34.913812 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" event={"ID":"4e159776-52a9-419f-936f-4a921173dd04","Type":"ContainerStarted","Data":"082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2"} Oct 14 09:14:35 crc kubenswrapper[5058]: I1014 09:14:35.932504 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" event={"ID":"4e159776-52a9-419f-936f-4a921173dd04","Type":"ContainerStarted","Data":"120b297f695233b2fd4d9064092e430778225745ad9dc4ce0fc27e3d797d47a1"} Oct 14 09:14:35 crc kubenswrapper[5058]: I1014 09:14:35.934064 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" event={"ID":"f0c0452c-9556-4225-89fd-d5a07ad5fbfa","Type":"ContainerStarted","Data":"88cfa36baf726109b9b8f08cc30497e3dc02b4b29608a8c9e5ad166e136c60da"} Oct 14 09:14:35 crc kubenswrapper[5058]: I1014 09:14:35.948949 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" podStartSLOduration=3.178682972 podStartE2EDuration="3.948930005s" podCreationTimestamp="2025-10-14 09:14:32 +0000 UTC" firstStartedPulling="2025-10-14 09:14:34.605393781 +0000 UTC m=+8822.516477597" lastFinishedPulling="2025-10-14 09:14:35.375640784 +0000 UTC m=+8823.286724630" observedRunningTime="2025-10-14 09:14:35.945016253 +0000 UTC m=+8823.856100059" watchObservedRunningTime="2025-10-14 09:14:35.948930005 +0000 UTC m=+8823.860013811" Oct 14 09:14:35 crc kubenswrapper[5058]: I1014 09:14:35.968672 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" podStartSLOduration=3.214485216 podStartE2EDuration="3.968655399s" podCreationTimestamp="2025-10-14 09:14:32 +0000 UTC" firstStartedPulling="2025-10-14 09:14:33.8789091 +0000 UTC m=+8821.789992946" lastFinishedPulling="2025-10-14 09:14:34.633079313 +0000 UTC m=+8822.544163129" observedRunningTime="2025-10-14 
09:14:35.962945306 +0000 UTC m=+8823.874029112" watchObservedRunningTime="2025-10-14 09:14:35.968655399 +0000 UTC m=+8823.879739205" Oct 14 09:14:38 crc kubenswrapper[5058]: I1014 09:14:38.790742 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74" Oct 14 09:14:39 crc kubenswrapper[5058]: I1014 09:14:39.989915 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353"} Oct 14 09:14:45 crc kubenswrapper[5058]: I1014 09:14:45.016996 5058 scope.go:117] "RemoveContainer" containerID="97e6ec4bc6f5195919b73b02ddcc547103828b7668659707a2d28217f69c93a6" Oct 14 09:14:45 crc kubenswrapper[5058]: I1014 09:14:45.077597 5058 scope.go:117] "RemoveContainer" containerID="4621a49938f4fc0fa6dfb437ed0c2cb1e2820f94912910415e72278be0d7e30a" Oct 14 09:14:45 crc kubenswrapper[5058]: I1014 09:14:45.275463 5058 scope.go:117] "RemoveContainer" containerID="c0202b1f68c9439aff6fa49f14ca51ebd5fc81198aa4377d05679bf3ce55725c" Oct 14 09:14:45 crc kubenswrapper[5058]: I1014 09:14:45.333174 5058 scope.go:117] "RemoveContainer" containerID="b1fa8409ea3951fb1af915d0fceae9dba5d0f01bcf040665961dc95997252719" Oct 14 09:14:45 crc kubenswrapper[5058]: I1014 09:14:45.358259 5058 scope.go:117] "RemoveContainer" containerID="4aa20c779b855c39be85093dffc89e4301f0ad5b479d68bd98719f956fa2bbd2" Oct 14 09:14:51 crc kubenswrapper[5058]: I1014 09:14:51.059781 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6hm4l"] Oct 14 09:14:51 crc kubenswrapper[5058]: I1014 09:14:51.071208 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6hm4l"] Oct 14 09:14:52 crc kubenswrapper[5058]: I1014 09:14:52.812584 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcba8df7-31f3-4c31-84ef-0c687e1ad241" path="/var/lib/kubelet/pods/dcba8df7-31f3-4c31-84ef-0c687e1ad241/volumes" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.182217 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6"] Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.186531 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.191057 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.191861 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.202492 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6"] Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.364105 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzgg\" (UniqueName: \"kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.364351 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.364394 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.466959 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.467008 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.467146 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzgg\" (UniqueName: \"kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.468271 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume\") pod 
\"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.475995 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.486791 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzgg\" (UniqueName: \"kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg\") pod \"collect-profiles-29340555-h6dv6\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:00 crc kubenswrapper[5058]: I1014 09:15:00.532493 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.039078 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7e14-account-create-mj6zp"] Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.050862 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6"] Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.058876 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7e14-account-create-mj6zp"] Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.294377 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" event={"ID":"96ba45f1-d2f5-4b78-8936-aed215eea0ef","Type":"ContainerStarted","Data":"6fcb5718c38d1bfedf17c258d9730cb2a4206613974fe13989fbcf855618755b"} Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.294454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" event={"ID":"96ba45f1-d2f5-4b78-8936-aed215eea0ef","Type":"ContainerStarted","Data":"8aaef1f0cc8ca6ecd6b8cc2e580eedc034e7f201a07f4840e6cdd195ba5c6903"} Oct 14 09:15:01 crc kubenswrapper[5058]: I1014 09:15:01.313885 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" podStartSLOduration=1.313859206 podStartE2EDuration="1.313859206s" podCreationTimestamp="2025-10-14 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:15:01.309865262 +0000 UTC m=+8849.220949078" watchObservedRunningTime="2025-10-14 09:15:01.313859206 +0000 UTC m=+8849.224943052" Oct 14 09:15:02 crc kubenswrapper[5058]: I1014 09:15:02.313020 5058 generic.go:334] "Generic (PLEG): container finished" podID="96ba45f1-d2f5-4b78-8936-aed215eea0ef" containerID="6fcb5718c38d1bfedf17c258d9730cb2a4206613974fe13989fbcf855618755b" exitCode=0 Oct 14 09:15:02 crc kubenswrapper[5058]: I1014 09:15:02.313244 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" 
event={"ID":"96ba45f1-d2f5-4b78-8936-aed215eea0ef","Type":"ContainerDied","Data":"6fcb5718c38d1bfedf17c258d9730cb2a4206613974fe13989fbcf855618755b"} Oct 14 09:15:02 crc kubenswrapper[5058]: I1014 09:15:02.810373 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad1878c-a41e-4233-9828-414e241e1a70" path="/var/lib/kubelet/pods/aad1878c-a41e-4233-9828-414e241e1a70/volumes" Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.826249 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.968614 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume\") pod \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.969375 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kzgg\" (UniqueName: \"kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg\") pod \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.969468 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume\") pod \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\" (UID: \"96ba45f1-d2f5-4b78-8936-aed215eea0ef\") " Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.970250 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "96ba45f1-d2f5-4b78-8936-aed215eea0ef" (UID: "96ba45f1-d2f5-4b78-8936-aed215eea0ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.971319 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ba45f1-d2f5-4b78-8936-aed215eea0ef-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.975772 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96ba45f1-d2f5-4b78-8936-aed215eea0ef" (UID: "96ba45f1-d2f5-4b78-8936-aed215eea0ef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:15:03 crc kubenswrapper[5058]: I1014 09:15:03.981775 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg" (OuterVolumeSpecName: "kube-api-access-2kzgg") pod "96ba45f1-d2f5-4b78-8936-aed215eea0ef" (UID: "96ba45f1-d2f5-4b78-8936-aed215eea0ef"). InnerVolumeSpecName "kube-api-access-2kzgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.073434 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kzgg\" (UniqueName: \"kubernetes.io/projected/96ba45f1-d2f5-4b78-8936-aed215eea0ef-kube-api-access-2kzgg\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.073469 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96ba45f1-d2f5-4b78-8936-aed215eea0ef-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.348399 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" event={"ID":"96ba45f1-d2f5-4b78-8936-aed215eea0ef","Type":"ContainerDied","Data":"8aaef1f0cc8ca6ecd6b8cc2e580eedc034e7f201a07f4840e6cdd195ba5c6903"} Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.348452 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aaef1f0cc8ca6ecd6b8cc2e580eedc034e7f201a07f4840e6cdd195ba5c6903" Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.348463 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6" Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.394095 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd"] Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.405008 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340510-h65pd"] Oct 14 09:15:04 crc kubenswrapper[5058]: I1014 09:15:04.811656 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2376534f-b530-4fa8-8436-72741d791df5" path="/var/lib/kubelet/pods/2376534f-b530-4fa8-8436-72741d791df5/volumes" Oct 14 09:15:11 crc kubenswrapper[5058]: I1014 09:15:11.040947 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fm9pt"] Oct 14 09:15:11 crc kubenswrapper[5058]: I1014 09:15:11.061892 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fm9pt"] Oct 14 09:15:12 crc kubenswrapper[5058]: I1014 09:15:12.806501 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c6fa10-836b-4dd1-bf93-0efd2101ad05" path="/var/lib/kubelet/pods/82c6fa10-836b-4dd1-bf93-0efd2101ad05/volumes" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.529466 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:19 crc kubenswrapper[5058]: E1014 09:15:19.531363 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ba45f1-d2f5-4b78-8936-aed215eea0ef" containerName="collect-profiles" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.531403 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ba45f1-d2f5-4b78-8936-aed215eea0ef" containerName="collect-profiles" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.531997 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ba45f1-d2f5-4b78-8936-aed215eea0ef" containerName="collect-profiles" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.535597 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.554505 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.696414 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.696954 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.697125 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsq7\" (UniqueName: \"kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.799749 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.799883 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsq7\" (UniqueName: \"kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.799972 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.800452 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.800680 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.822723 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bvsq7\" (UniqueName: \"kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7\") pod \"community-operators-6jjnq\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:19 crc kubenswrapper[5058]: I1014 09:15:19.906360 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:20 crc kubenswrapper[5058]: I1014 09:15:20.407165 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:20 crc kubenswrapper[5058]: W1014 09:15:20.413356 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364a6502_0592_42b5_8756_9036150be2df.slice/crio-961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809 WatchSource:0}: Error finding container 961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809: Status 404 returned error can't find the container with id 961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809 Oct 14 09:15:20 crc kubenswrapper[5058]: I1014 09:15:20.558282 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerStarted","Data":"961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809"} Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.309301 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.312157 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.323204 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.436716 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.437168 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.437324 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tm7\" (UniqueName: \"kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.539862 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.539977 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tm7\" (UniqueName: \"kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.540095 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.541225 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.541598 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.574038 5058 generic.go:334] "Generic 
(PLEG): container finished" podID="364a6502-0592-42b5-8756-9036150be2df" containerID="094f1668ae6813f24930c204c4a5a739f3982eda15e2d3143b5e396ecd349ea8" exitCode=0 Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.574101 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerDied","Data":"094f1668ae6813f24930c204c4a5a739f3982eda15e2d3143b5e396ecd349ea8"} Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.588141 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tm7\" (UniqueName: \"kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7\") pod \"certified-operators-hlpxm\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:21 crc kubenswrapper[5058]: I1014 09:15:21.642593 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:22 crc kubenswrapper[5058]: W1014 09:15:22.207031 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d75d3f0_64c8_473d_b266_e2ebfaefd212.slice/crio-78152ff22142cd009bbdbff3f6aedb44bb96704d903ea3278110324d7dfeb110 WatchSource:0}: Error finding container 78152ff22142cd009bbdbff3f6aedb44bb96704d903ea3278110324d7dfeb110: Status 404 returned error can't find the container with id 78152ff22142cd009bbdbff3f6aedb44bb96704d903ea3278110324d7dfeb110 Oct 14 09:15:22 crc kubenswrapper[5058]: I1014 09:15:22.208787 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:22 crc kubenswrapper[5058]: I1014 09:15:22.587666 5058 generic.go:334] "Generic (PLEG): container finished" podID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerID="89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c" exitCode=0 Oct 14 09:15:22 crc kubenswrapper[5058]: I1014 09:15:22.587718 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerDied","Data":"89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c"} Oct 14 09:15:22 crc kubenswrapper[5058]: I1014 09:15:22.588001 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerStarted","Data":"78152ff22142cd009bbdbff3f6aedb44bb96704d903ea3278110324d7dfeb110"} Oct 14 09:15:22 crc kubenswrapper[5058]: I1014 09:15:22.592470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerStarted","Data":"fe3e201715fb6417d79896120ec3b064eed057574d093ee289b3fb9ba7862cee"} Oct 14 09:15:25 crc kubenswrapper[5058]: I1014 09:15:25.638600 5058 generic.go:334] "Generic (PLEG): container finished" podID="364a6502-0592-42b5-8756-9036150be2df" containerID="fe3e201715fb6417d79896120ec3b064eed057574d093ee289b3fb9ba7862cee" exitCode=0 Oct 14 09:15:25 crc kubenswrapper[5058]: I1014 09:15:25.638731 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" 
event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerDied","Data":"fe3e201715fb6417d79896120ec3b064eed057574d093ee289b3fb9ba7862cee"} Oct 14 09:15:25 crc kubenswrapper[5058]: I1014 09:15:25.642709 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerStarted","Data":"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1"} Oct 14 09:15:26 crc kubenswrapper[5058]: I1014 09:15:26.658632 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerStarted","Data":"b697cb8ab58f2904190d487add1505e83c5bc9e8f65aa44b410424d52ca04817"} Oct 14 09:15:26 crc kubenswrapper[5058]: I1014 09:15:26.662061 5058 generic.go:334] "Generic (PLEG): container finished" podID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerID="76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1" exitCode=0 Oct 14 09:15:26 crc kubenswrapper[5058]: I1014 09:15:26.662126 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerDied","Data":"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1"} Oct 14 09:15:26 crc kubenswrapper[5058]: I1014 09:15:26.705397 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jjnq" podStartSLOduration=2.966767028 podStartE2EDuration="7.705371809s" podCreationTimestamp="2025-10-14 09:15:19 +0000 UTC" firstStartedPulling="2025-10-14 09:15:21.57741164 +0000 UTC m=+8869.488495446" lastFinishedPulling="2025-10-14 09:15:26.316016421 +0000 UTC m=+8874.227100227" observedRunningTime="2025-10-14 09:15:26.68263297 +0000 UTC m=+8874.593716846" watchObservedRunningTime="2025-10-14 09:15:26.705371809 +0000 UTC m=+8874.616455635" Oct 14 09:15:27 crc kubenswrapper[5058]: I1014 09:15:27.677651 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerStarted","Data":"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6"} Oct 14 09:15:27 crc kubenswrapper[5058]: I1014 09:15:27.697555 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hlpxm" podStartSLOduration=2.096131688 podStartE2EDuration="6.697541038s" podCreationTimestamp="2025-10-14 09:15:21 +0000 UTC" firstStartedPulling="2025-10-14 09:15:22.590191107 +0000 UTC m=+8870.501274933" lastFinishedPulling="2025-10-14 09:15:27.191600477 +0000 UTC m=+8875.102684283" observedRunningTime="2025-10-14 09:15:27.693286097 +0000 UTC m=+8875.604369913" watchObservedRunningTime="2025-10-14 09:15:27.697541038 +0000 UTC m=+8875.608624844" Oct 14 09:15:29 crc kubenswrapper[5058]: I1014 09:15:29.906861 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:29 crc kubenswrapper[5058]: I1014 09:15:29.907281 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:30 crc kubenswrapper[5058]: I1014 09:15:30.957686 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6jjnq" 
podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="registry-server" probeResult="failure" output=< Oct 14 09:15:30 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:15:30 crc kubenswrapper[5058]: > Oct 14 09:15:31 crc kubenswrapper[5058]: I1014 09:15:31.643443 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:31 crc kubenswrapper[5058]: I1014 09:15:31.643502 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:32 crc kubenswrapper[5058]: I1014 09:15:32.727877 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hlpxm" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="registry-server" probeResult="failure" output=< Oct 14 09:15:32 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:15:32 crc kubenswrapper[5058]: > Oct 14 09:15:40 crc kubenswrapper[5058]: I1014 09:15:40.073676 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:40 crc kubenswrapper[5058]: I1014 09:15:40.175125 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:40 crc kubenswrapper[5058]: I1014 09:15:40.330218 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:41 crc kubenswrapper[5058]: I1014 09:15:41.716819 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:41 crc kubenswrapper[5058]: I1014 09:15:41.806213 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:41 crc kubenswrapper[5058]: I1014 09:15:41.868031 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6jjnq" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="registry-server" containerID="cri-o://b697cb8ab58f2904190d487add1505e83c5bc9e8f65aa44b410424d52ca04817" gracePeriod=2 Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.733044 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.882571 5058 generic.go:334] "Generic (PLEG): container finished" podID="364a6502-0592-42b5-8756-9036150be2df" containerID="b697cb8ab58f2904190d487add1505e83c5bc9e8f65aa44b410424d52ca04817" exitCode=0 Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.882645 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerDied","Data":"b697cb8ab58f2904190d487add1505e83c5bc9e8f65aa44b410424d52ca04817"} Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.882952 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jjnq" event={"ID":"364a6502-0592-42b5-8756-9036150be2df","Type":"ContainerDied","Data":"961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809"} Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.882969 5058 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="961a9d14d64a5ed306f54e2e6f396830062f2e3c8722f10cd6a4f37787af9809" Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.883124 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hlpxm" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="registry-server" containerID="cri-o://95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6" gracePeriod=2 Oct 14 09:15:42 crc kubenswrapper[5058]: I1014 09:15:42.904606 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.080058 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content\") pod \"364a6502-0592-42b5-8756-9036150be2df\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.080170 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities\") pod \"364a6502-0592-42b5-8756-9036150be2df\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.080248 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvsq7\" (UniqueName: \"kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7\") pod \"364a6502-0592-42b5-8756-9036150be2df\" (UID: \"364a6502-0592-42b5-8756-9036150be2df\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.080977 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities" (OuterVolumeSpecName: "utilities") pod "364a6502-0592-42b5-8756-9036150be2df" (UID: "364a6502-0592-42b5-8756-9036150be2df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.086328 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7" (OuterVolumeSpecName: "kube-api-access-bvsq7") pod "364a6502-0592-42b5-8756-9036150be2df" (UID: "364a6502-0592-42b5-8756-9036150be2df"). InnerVolumeSpecName "kube-api-access-bvsq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.137205 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "364a6502-0592-42b5-8756-9036150be2df" (UID: "364a6502-0592-42b5-8756-9036150be2df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.183065 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.183193 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a6502-0592-42b5-8756-9036150be2df-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.183206 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvsq7\" (UniqueName: \"kubernetes.io/projected/364a6502-0592-42b5-8756-9036150be2df-kube-api-access-bvsq7\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.380871 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.491597 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5tm7\" (UniqueName: \"kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7\") pod \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.492631 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content\") pod \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.494610 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities" (OuterVolumeSpecName: "utilities") pod "6d75d3f0-64c8-473d-b266-e2ebfaefd212" (UID: "6d75d3f0-64c8-473d-b266-e2ebfaefd212"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.495159 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7" (OuterVolumeSpecName: "kube-api-access-s5tm7") pod "6d75d3f0-64c8-473d-b266-e2ebfaefd212" (UID: "6d75d3f0-64c8-473d-b266-e2ebfaefd212"). InnerVolumeSpecName "kube-api-access-s5tm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.495577 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities\") pod \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\" (UID: \"6d75d3f0-64c8-473d-b266-e2ebfaefd212\") " Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.497090 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5tm7\" (UniqueName: \"kubernetes.io/projected/6d75d3f0-64c8-473d-b266-e2ebfaefd212-kube-api-access-s5tm7\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.497304 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.566415 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d75d3f0-64c8-473d-b266-e2ebfaefd212" (UID: "6d75d3f0-64c8-473d-b266-e2ebfaefd212"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.599734 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d75d3f0-64c8-473d-b266-e2ebfaefd212-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894628 5058 generic.go:334] "Generic (PLEG): container finished" podID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerID="95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6" exitCode=0 Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894698 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerDied","Data":"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6"} Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894716 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jjnq" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894737 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlpxm" event={"ID":"6d75d3f0-64c8-473d-b266-e2ebfaefd212","Type":"ContainerDied","Data":"78152ff22142cd009bbdbff3f6aedb44bb96704d903ea3278110324d7dfeb110"} Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894755 5058 scope.go:117] "RemoveContainer" containerID="95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.894745 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hlpxm" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.953534 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.957454 5058 scope.go:117] "RemoveContainer" containerID="76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1" Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.964329 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6jjnq"] Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.977635 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.985888 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hlpxm"] Oct 14 09:15:43 crc kubenswrapper[5058]: I1014 09:15:43.992591 5058 scope.go:117] "RemoveContainer" containerID="89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.025608 5058 scope.go:117] "RemoveContainer" containerID="95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6" Oct 14 09:15:44 crc kubenswrapper[5058]: E1014 09:15:44.026216 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6\": container with ID starting with 95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6 not found: ID does not exist" containerID="95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.026272 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6"} err="failed to get container status \"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6\": rpc error: code = NotFound desc = could not find container \"95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6\": container with ID starting with 95cef5e9e80821661e944bbc0ee25f3fca5d3ed2d98ee5a2eceff2a29619c3a6 not found: ID does not exist" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.026351 5058 scope.go:117] "RemoveContainer" containerID="76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1" Oct 14 09:15:44 crc kubenswrapper[5058]: E1014 09:15:44.026838 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1\": container with ID starting with 76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1 not found: ID does not exist" containerID="76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.026878 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1"} err="failed to get container status \"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1\": rpc error: code = NotFound desc = could not find container \"76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1\": container with ID starting with 
76f5857428561612fe45b6423490ab07e5c2f91d6df9723e57e5b4676653edc1 not found: ID does not exist" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.026903 5058 scope.go:117] "RemoveContainer" containerID="89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c" Oct 14 09:15:44 crc kubenswrapper[5058]: E1014 09:15:44.027244 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c\": container with ID starting with 89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c not found: ID does not exist" containerID="89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.027275 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c"} err="failed to get container status \"89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c\": rpc error: code = NotFound desc = could not find container \"89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c\": container with ID starting with 89773c3be4af44f7cafab3bcf4a189b49c3cf255db1800b4aa2cbf636ed08f0c not found: ID does not exist" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.828893 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364a6502-0592-42b5-8756-9036150be2df" path="/var/lib/kubelet/pods/364a6502-0592-42b5-8756-9036150be2df/volumes" Oct 14 09:15:44 crc kubenswrapper[5058]: I1014 09:15:44.836504 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" path="/var/lib/kubelet/pods/6d75d3f0-64c8-473d-b266-e2ebfaefd212/volumes" Oct 14 09:15:45 crc kubenswrapper[5058]: I1014 09:15:45.584704 5058 scope.go:117] "RemoveContainer" containerID="18b7726ea8ebc30436362674b35bd5d67f1f1bfd54e722d077c9b59928a76802" Oct 14 09:15:45 crc kubenswrapper[5058]: I1014 09:15:45.619756 5058 scope.go:117] "RemoveContainer" containerID="59ca14242bbc69f17c933e8f9c5d2561578fac5cef66c06c35422cbd60b0944c" Oct 14 09:15:45 crc kubenswrapper[5058]: I1014 09:15:45.703535 5058 scope.go:117] "RemoveContainer" containerID="f3f4c9851ea8a268404411371b10a44a7fdd8663537e6091e5ae2a11f23cb01e" Oct 14 09:15:45 crc kubenswrapper[5058]: I1014 09:15:45.757360 5058 scope.go:117] "RemoveContainer" containerID="f78142137be3dd05b8966b9327f89a2758cf54b7d246b06987216eebc1c23e4a" Oct 14 09:16:14 crc kubenswrapper[5058]: I1014 09:16:14.057314 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-st5rv"] Oct 14 09:16:14 crc kubenswrapper[5058]: I1014 09:16:14.075984 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-st5rv"] Oct 14 09:16:14 crc kubenswrapper[5058]: I1014 09:16:14.810329 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b5ed5c-0a6b-41d8-b24c-3916aff7970e" path="/var/lib/kubelet/pods/58b5ed5c-0a6b-41d8-b24c-3916aff7970e/volumes" Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.040088 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hswx5"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.061146 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hswx5"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.072155 5058 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell3-db-create-98fgt"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.084078 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell3-db-create-98fgt"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.094864 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-db-create-595tr"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.103911 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-r9nqc"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.110867 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell2-db-create-595tr"] Oct 14 09:16:15 crc kubenswrapper[5058]: I1014 09:16:15.117788 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-r9nqc"] Oct 14 09:16:16 crc kubenswrapper[5058]: I1014 09:16:16.813990 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446b3620-cf2b-4b00-b483-a7cf6f51a279" path="/var/lib/kubelet/pods/446b3620-cf2b-4b00-b483-a7cf6f51a279/volumes" Oct 14 09:16:16 crc kubenswrapper[5058]: I1014 09:16:16.815771 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f335715-e559-42b5-8609-6c9493ed63d1" path="/var/lib/kubelet/pods/5f335715-e559-42b5-8609-6c9493ed63d1/volumes" Oct 14 09:16:16 crc kubenswrapper[5058]: I1014 09:16:16.816930 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed56fa1-6905-4f81-94f4-68d73e91170d" path="/var/lib/kubelet/pods/9ed56fa1-6905-4f81-94f4-68d73e91170d/volumes" Oct 14 09:16:16 crc kubenswrapper[5058]: I1014 09:16:16.818103 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da88b0aa-ae98-4648-9cfa-f93f1b6030aa" path="/var/lib/kubelet/pods/da88b0aa-ae98-4648-9cfa-f93f1b6030aa/volumes" Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.063120 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c786-account-create-lbwbz"] Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.079349 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-441c-account-create-r8sls"] Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.092289 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3e4f-account-create-rm4rn"] Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.105028 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-441c-account-create-r8sls"] Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.115438 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3e4f-account-create-rm4rn"] Oct 14 09:16:25 crc kubenswrapper[5058]: I1014 09:16:25.123930 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c786-account-create-lbwbz"] Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.051822 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-2491-account-create-w85vz"] Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.066199 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell3-2a5a-account-create-rfgzt"] Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.079581 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell2-2491-account-create-w85vz"] Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.091269 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell3-2a5a-account-create-rfgzt"] Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.834438 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80876351-2288-4166-828c-a950d07d9110" path="/var/lib/kubelet/pods/80876351-2288-4166-828c-a950d07d9110/volumes" Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.835190 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973bf351-be16-427c-97f0-058be4442fbb" path="/var/lib/kubelet/pods/973bf351-be16-427c-97f0-058be4442fbb/volumes" Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.836006 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1043be2-dee2-4a1e-a0e9-69faae8528a8" path="/var/lib/kubelet/pods/d1043be2-dee2-4a1e-a0e9-69faae8528a8/volumes" Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.836940 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb62eadc-7258-4ae7-b4c7-1532222f9626" path="/var/lib/kubelet/pods/eb62eadc-7258-4ae7-b4c7-1532222f9626/volumes" Oct 14 09:16:26 crc kubenswrapper[5058]: I1014 09:16:26.838494 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa186350-f3ef-4dc8-a639-f416520c45c7" path="/var/lib/kubelet/pods/fa186350-f3ef-4dc8-a639-f416520c45c7/volumes" Oct 14 09:16:44 crc kubenswrapper[5058]: I1014 09:16:44.047456 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2sg4"] Oct 14 09:16:44 crc kubenswrapper[5058]: I1014 09:16:44.061586 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2sg4"] Oct 14 09:16:44 crc kubenswrapper[5058]: I1014 09:16:44.815429 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634840fa-f062-41bf-95e9-6e6171820f13" path="/var/lib/kubelet/pods/634840fa-f062-41bf-95e9-6e6171820f13/volumes" Oct 14 09:16:45 crc kubenswrapper[5058]: I1014 09:16:45.906344 5058 scope.go:117] "RemoveContainer" containerID="69ce234c4cdcf626558316e24ab8e7c7f0d7a97c54f15e052a703c178d4986cd" Oct 14 09:16:45 crc kubenswrapper[5058]: I1014 09:16:45.946380 5058 scope.go:117] "RemoveContainer" containerID="9aa0ef363939c6cd9a16b3c400615dded93d7ad5300c2d5bb1d63c6ba5e85e1d" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.000243 5058 scope.go:117] "RemoveContainer" containerID="37343111f9a912598fd3f4fcc12b6a56d84e20cfb904aa8432406a9766f03172" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.081346 5058 scope.go:117] "RemoveContainer" containerID="366fb880f272a4ff23e8f86a80cad635d74d6f29230260429916eca1b93004a9" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.131992 5058 scope.go:117] "RemoveContainer" containerID="718a4c0b1affa687d939e9375caec7a99bcc36c4f3e7fc4b72489cc2ef180ec7" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.179281 5058 scope.go:117] "RemoveContainer" containerID="660d83f2c8d270d37baafa5f948b6a276118864f9d24dc25aa05bc1d52a15718" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.228771 5058 scope.go:117] "RemoveContainer" containerID="7e2b7ca119fc366ee1c655efb09e0b864bab8938c189195edbdb764468195ab5" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.255657 5058 scope.go:117] "RemoveContainer" containerID="49b5f0572d513ec539e2fb4f76017d4d0f405ef94f90169d3a1746e9f4b21893" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.290614 5058 scope.go:117] "RemoveContainer" containerID="e2f5f1cde1ba0d7dd8b1fb7a13f1a0f22df0a6f9126b2a47ce0d667d9a2f15d2" Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 
09:16:46.319448 5058 scope.go:117] "RemoveContainer" containerID="935b2eeb57a450fe5e873c969b54e4cc5e2d0e5110234740359d692d8267b9d5"
Oct 14 09:16:46 crc kubenswrapper[5058]: I1014 09:16:46.352076 5058 scope.go:117] "RemoveContainer" containerID="e4a5b29b94a032d4debeb8b99ee28cf79c6557d64aedc7bad22f04b37eeeca19"
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.061206 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h584n"]
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.075620 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h584n"]
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.085895 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell3-conductor-db-sync-rmkxf"]
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.096739 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell3-conductor-db-sync-rmkxf"]
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.815682 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfebb4c9-3b48-4463-8e95-d6b68646ba51" path="/var/lib/kubelet/pods/dfebb4c9-3b48-4463-8e95-d6b68646ba51/volumes"
Oct 14 09:17:00 crc kubenswrapper[5058]: I1014 09:17:00.817466 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed21e369-4fc4-44d6-a38a-1740d99705bf" path="/var/lib/kubelet/pods/ed21e369-4fc4-44d6-a38a-1740d99705bf/volumes"
Oct 14 09:17:01 crc kubenswrapper[5058]: I1014 09:17:01.042733 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-conductor-db-sync-7hvm2"]
Oct 14 09:17:01 crc kubenswrapper[5058]: I1014 09:17:01.057246 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-55r6w"]
Oct 14 09:17:01 crc kubenswrapper[5058]: I1014 09:17:01.068296 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell2-conductor-db-sync-7hvm2"]
Oct 14 09:17:01 crc kubenswrapper[5058]: I1014 09:17:01.080693 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-55r6w"]
Oct 14 09:17:02 crc kubenswrapper[5058]: I1014 09:17:02.810903 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa59b2f-7cff-4d52-826e-eda45ae51a10" path="/var/lib/kubelet/pods/0fa59b2f-7cff-4d52-826e-eda45ae51a10/volumes"
Oct 14 09:17:02 crc kubenswrapper[5058]: I1014 09:17:02.813183 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8abb55-36b8-4dfe-8594-22d8bc9a15e7" path="/var/lib/kubelet/pods/ab8abb55-36b8-4dfe-8594-22d8bc9a15e7/volumes"
Oct 14 09:17:03 crc kubenswrapper[5058]: I1014 09:17:03.656516 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:17:03 crc kubenswrapper[5058]: I1014 09:17:03.656612 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:17:20 crc kubenswrapper[5058]: I1014 09:17:20.035853 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell3-cell-mapping-5cg8j"]
Oct 14 09:17:20 crc kubenswrapper[5058]: I1014 09:17:20.047175 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell3-cell-mapping-5cg8j"]
Oct 14 09:17:20 crc kubenswrapper[5058]: I1014 09:17:20.811480 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5b7921-0fe3-4434-b195-269571dcb814" path="/var/lib/kubelet/pods/db5b7921-0fe3-4434-b195-269571dcb814/volumes"
Oct 14 09:17:21 crc kubenswrapper[5058]: I1014 09:17:21.056843 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8zv"]
Oct 14 09:17:21 crc kubenswrapper[5058]: I1014 09:17:21.065326 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qn8zv"]
Oct 14 09:17:21 crc kubenswrapper[5058]: I1014 09:17:21.074507 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-cell-mapping-prkt6"]
Oct 14 09:17:21 crc kubenswrapper[5058]: I1014 09:17:21.083162 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell2-cell-mapping-prkt6"]
Oct 14 09:17:22 crc kubenswrapper[5058]: I1014 09:17:22.805613 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9269bf36-7053-4374-8049-ead5e973b407" path="/var/lib/kubelet/pods/9269bf36-7053-4374-8049-ead5e973b407/volumes"
Oct 14 09:17:22 crc kubenswrapper[5058]: I1014 09:17:22.807302 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a624d348-510d-461d-b705-f8eb754d02ba" path="/var/lib/kubelet/pods/a624d348-510d-461d-b705-f8eb754d02ba/volumes"
Oct 14 09:17:33 crc kubenswrapper[5058]: I1014 09:17:33.656159 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:17:33 crc kubenswrapper[5058]: I1014 09:17:33.656730 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:17:45 crc kubenswrapper[5058]: I1014 09:17:45.510304 5058 generic.go:334] "Generic (PLEG): container finished" podID="f0c0452c-9556-4225-89fd-d5a07ad5fbfa" containerID="88cfa36baf726109b9b8f08cc30497e3dc02b4b29608a8c9e5ad166e136c60da" exitCode=0
Oct 14 09:17:45 crc kubenswrapper[5058]: I1014 09:17:45.510406 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" event={"ID":"f0c0452c-9556-4225-89fd-d5a07ad5fbfa","Type":"ContainerDied","Data":"88cfa36baf726109b9b8f08cc30497e3dc02b4b29608a8c9e5ad166e136c60da"}
Oct 14 09:17:46 crc kubenswrapper[5058]: I1014 09:17:46.594906 5058 scope.go:117] "RemoveContainer" containerID="32346549baf6e71472566ac65c6453ed6ed142ddcdb6402a2a7a453da6ae9546"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.216619 5058 scope.go:117] "RemoveContainer" containerID="c0ae6a2c000919076b9931d9f0747226e73fe7ed486da06a47ca43204b52b59f"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.421772 5058 scope.go:117] "RemoveContainer" containerID="57666bdc8c09cae7f4312dbe39fe4c843e95425ec81f67fa14dc2ad18296c122"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.460062 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.513309 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle\") pod \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") "
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.513376 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory\") pod \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") "
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.513442 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key\") pod \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") "
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.513503 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf2bf\" (UniqueName: \"kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf\") pod \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\" (UID: \"f0c0452c-9556-4225-89fd-d5a07ad5fbfa\") "
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.518235 5058 scope.go:117] "RemoveContainer" containerID="ed03a4dd57b03ecb6e34a44b6723c62a4e63a6750466b0f409342084f9b282d6"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.519610 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf" (OuterVolumeSpecName: "kube-api-access-mf2bf") pod "f0c0452c-9556-4225-89fd-d5a07ad5fbfa" (UID: "f0c0452c-9556-4225-89fd-d5a07ad5fbfa"). InnerVolumeSpecName "kube-api-access-mf2bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.521777 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "f0c0452c-9556-4225-89fd-d5a07ad5fbfa" (UID: "f0c0452c-9556-4225-89fd-d5a07ad5fbfa"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.554509 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory" (OuterVolumeSpecName: "inventory") pod "f0c0452c-9556-4225-89fd-d5a07ad5fbfa" (UID: "f0c0452c-9556-4225-89fd-d5a07ad5fbfa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.559363 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4" event={"ID":"f0c0452c-9556-4225-89fd-d5a07ad5fbfa","Type":"ContainerDied","Data":"b2106a9cbe838791367fae181cf88e8cac9fcbf041bbbecee46fb75efa860b1f"}
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.559404 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2106a9cbe838791367fae181cf88e8cac9fcbf041bbbecee46fb75efa860b1f"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.559458 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.569650 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f0c0452c-9556-4225-89fd-d5a07ad5fbfa" (UID: "f0c0452c-9556-4225-89fd-d5a07ad5fbfa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.571464 5058 scope.go:117] "RemoveContainer" containerID="6103137bfa1c07719266ed40132a241f1f122352577df59ec1353520b3bfa564"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.615312 5058 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.615339 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.615350 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.615360 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf2bf\" (UniqueName: \"kubernetes.io/projected/f0c0452c-9556-4225-89fd-d5a07ad5fbfa-kube-api-access-mf2bf\") on node \"crc\" DevicePath \"\""
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.622042 5058 scope.go:117] "RemoveContainer" containerID="7ebbc5ba8183ae77a7a1f2b7b4f9743ca1334d9057a56b761032a5ee5fb856eb"
Oct 14 09:17:47 crc kubenswrapper[5058]: I1014 09:17:47.660178 5058 scope.go:117] "RemoveContainer" containerID="6ea0c5e0ab4bfd6eb35787bddd607712b68538cc9fade45b0ba5f50827d3c3c0"
Oct 14 09:18:03 crc kubenswrapper[5058]: I1014 09:18:03.655649 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:18:03 crc kubenswrapper[5058]: I1014 09:18:03.656234 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:18:03 crc kubenswrapper[5058]: I1014 09:18:03.656279 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 09:18:03 crc kubenswrapper[5058]: I1014 09:18:03.657217 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 09:18:03 crc kubenswrapper[5058]: I1014 09:18:03.657302 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353" gracePeriod=600
Oct 14 09:18:03 crc kubenswrapper[5058]: E1014 09:18:03.830165 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64184db4_5b6d_4aa8_b780_c9f6163af3d8.slice/crio-cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64184db4_5b6d_4aa8_b780_c9f6163af3d8.slice/crio-conmon-cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 09:18:04 crc kubenswrapper[5058]: I1014 09:18:04.781409 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353" exitCode=0
Oct 14 09:18:04 crc kubenswrapper[5058]: I1014 09:18:04.781471 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353"}
Oct 14 09:18:04 crc kubenswrapper[5058]: I1014 09:18:04.781545 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"}
Oct 14 09:18:04 crc kubenswrapper[5058]: I1014 09:18:04.781575 5058 scope.go:117] "RemoveContainer" containerID="f65ee1ccffefcf85b0ddce91740009f0d1c07392d902974a019288b4acc6aa74"
Oct 14 09:19:29 crc kubenswrapper[5058]: I1014 09:19:29.060059 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-wq4qg"]
Oct 14 09:19:29 crc kubenswrapper[5058]: I1014 09:19:29.077925 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-wq4qg"]
Oct 14 09:19:30 crc kubenswrapper[5058]: I1014 09:19:30.807720 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cee57d-31cb-4d16-8424-f0c19da4ea13" path="/var/lib/kubelet/pods/a6cee57d-31cb-4d16-8424-f0c19da4ea13/volumes"
Oct 14 09:19:40 crc kubenswrapper[5058]: I1014 09:19:40.049710 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-98ec-account-create-qs55p"]
Oct 14 09:19:40 crc kubenswrapper[5058]: I1014 09:19:40.067583 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-98ec-account-create-qs55p"]
Oct 14 09:19:40 crc kubenswrapper[5058]: I1014 09:19:40.805395 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828a2275-8a5a-4686-893f-acd3c9dbc003" path="/var/lib/kubelet/pods/828a2275-8a5a-4686-893f-acd3c9dbc003/volumes"
Oct 14 09:19:47 crc kubenswrapper[5058]: I1014 09:19:47.869314 5058 scope.go:117] "RemoveContainer" containerID="8368099f4b3e722b1138cae5b456baa1580e75cccdabb76dc4d144864b1894b5"
Oct 14 09:19:47 crc kubenswrapper[5058]: I1014 09:19:47.898656 5058 scope.go:117] "RemoveContainer" containerID="e5fad26eb986a98a5417b9501dba4ea40f52076a394a23240512fb8d0ba56ece"
Oct 14 09:19:47 crc kubenswrapper[5058]: I1014 09:19:47.969450 5058 scope.go:117] "RemoveContainer" containerID="c8700ba8770048af3795cb56a70369552aaf61b9414926fa8cabd042bb665740"
Oct 14 09:19:48 crc kubenswrapper[5058]: I1014 09:19:48.034481 5058 scope.go:117] "RemoveContainer" containerID="45677642ac2bcbe25476c78502be8d9f71418aced6113b2b8143412788159b74"
Oct 14 09:19:53 crc kubenswrapper[5058]: I1014 09:19:53.052088 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-vr6kc"]
Oct 14 09:19:53 crc kubenswrapper[5058]: I1014 09:19:53.069061 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-vr6kc"]
Oct 14 09:19:54 crc kubenswrapper[5058]: I1014 09:19:54.810705 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216627be-67d1-4945-930b-6d1062f84697" path="/var/lib/kubelet/pods/216627be-67d1-4945-930b-6d1062f84697/volumes"
Oct 14 09:20:33 crc kubenswrapper[5058]: I1014 09:20:33.656232 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:20:33 crc kubenswrapper[5058]: I1014 09:20:33.656993 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:20:48 crc kubenswrapper[5058]: I1014 09:20:48.164350 5058 scope.go:117] "RemoveContainer" containerID="d776d486e822ca3deeeaea04d6775e6228f31daac408513050ca6c56dc31ede5"
Oct 14 09:21:03 crc kubenswrapper[5058]: I1014 09:21:03.656769 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:21:03 crc kubenswrapper[5058]: I1014 09:21:03.657686 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:21:03 crc kubenswrapper[5058]: I1014 09:21:03.979705 5058 generic.go:334] "Generic (PLEG): container finished" podID="4e159776-52a9-419f-936f-4a921173dd04" containerID="120b297f695233b2fd4d9064092e430778225745ad9dc4ce0fc27e3d797d47a1" exitCode=0
Oct 14 09:21:03 crc kubenswrapper[5058]: I1014 09:21:03.979767 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" event={"ID":"4e159776-52a9-419f-936f-4a921173dd04","Type":"ContainerDied","Data":"120b297f695233b2fd4d9064092e430778225745ad9dc4ce0fc27e3d797d47a1"}
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.539344 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2"
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.553229 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory\") pod \"4e159776-52a9-419f-936f-4a921173dd04\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") "
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.553502 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle\") pod \"4e159776-52a9-419f-936f-4a921173dd04\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") "
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.553690 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wm5f\" (UniqueName: \"kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f\") pod \"4e159776-52a9-419f-936f-4a921173dd04\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") "
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.553746 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key\") pod \"4e159776-52a9-419f-936f-4a921173dd04\" (UID: \"4e159776-52a9-419f-936f-4a921173dd04\") "
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.568320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f" (OuterVolumeSpecName: "kube-api-access-7wm5f") pod "4e159776-52a9-419f-936f-4a921173dd04" (UID: "4e159776-52a9-419f-936f-4a921173dd04"). InnerVolumeSpecName "kube-api-access-7wm5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.572085 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "4e159776-52a9-419f-936f-4a921173dd04" (UID: "4e159776-52a9-419f-936f-4a921173dd04"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.599735 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e159776-52a9-419f-936f-4a921173dd04" (UID: "4e159776-52a9-419f-936f-4a921173dd04"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.618746 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory" (OuterVolumeSpecName: "inventory") pod "4e159776-52a9-419f-936f-4a921173dd04" (UID: "4e159776-52a9-419f-936f-4a921173dd04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.655990 5058 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.656041 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wm5f\" (UniqueName: \"kubernetes.io/projected/4e159776-52a9-419f-936f-4a921173dd04-kube-api-access-7wm5f\") on node \"crc\" DevicePath \"\""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.656062 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 09:21:05 crc kubenswrapper[5058]: I1014 09:21:05.656078 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e159776-52a9-419f-936f-4a921173dd04-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 09:21:06 crc kubenswrapper[5058]: I1014 09:21:06.003344 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2" event={"ID":"4e159776-52a9-419f-936f-4a921173dd04","Type":"ContainerDied","Data":"082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2"}
Oct 14 09:21:06 crc kubenswrapper[5058]: I1014 09:21:06.003720 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="082e1f78fb7b5e765027c3aca9ce0f988804b9a7883801bff4abb75e1e347bf2"
Oct 14 09:21:06 crc kubenswrapper[5058]: I1014 09:21:06.003377 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.339582 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-p5ck7"]
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.341413 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="extract-utilities"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.341565 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="extract-utilities"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.341742 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="extract-content"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.341910 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="extract-content"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.342050 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="extract-utilities"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.342164 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="extract-utilities"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.342282 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e159776-52a9-419f-936f-4a921173dd04" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.342387 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e159776-52a9-419f-936f-4a921173dd04" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.342512 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.342616 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.342742 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="extract-content"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.342902 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="extract-content"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.343033 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.343138 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: E1014 09:21:07.343257 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c0452c-9556-4225-89fd-d5a07ad5fbfa" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.343360 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c0452c-9556-4225-89fd-d5a07ad5fbfa" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.343846 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="364a6502-0592-42b5-8756-9036150be2df" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.344033 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c0452c-9556-4225-89fd-d5a07ad5fbfa" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.344154 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e159776-52a9-419f-936f-4a921173dd04" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.344291 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d75d3f0-64c8-473d-b266-e2ebfaefd212" containerName="registry-server"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.345659 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.348734 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.348920 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.349433 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.352195 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.353990 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell2-dndm2"]
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.356423 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.359852 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.361074 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.373986 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-p5ck7"]
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.381022 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell2-dndm2"]
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.499893 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh9zf\" (UniqueName: \"kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.499984 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500113 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500164 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxms\" (UniqueName: \"kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500193 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500491 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500674 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.500756 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602394 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602443 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxms\" (UniqueName: \"kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602466 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602539 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602580 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602600 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602657 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh9zf\" (UniqueName: \"kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.602687 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.614759 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.616172 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.621360 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.623908 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.627333 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxms\" (UniqueName: \"kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms\") pod \"bootstrap-openstack-openstack-cell2-dndm2\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.629766 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.631895 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.636171 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh9zf\" (UniqueName: \"kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf\") pod \"bootstrap-openstack-openstack-cell1-p5ck7\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.687785 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7"
Oct 14 09:21:07 crc kubenswrapper[5058]: I1014 09:21:07.702787 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2"
Oct 14 09:21:08 crc kubenswrapper[5058]: I1014 09:21:08.287954 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell2-dndm2"]
Oct 14 09:21:08 crc kubenswrapper[5058]: I1014 09:21:08.298292 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 09:21:08 crc kubenswrapper[5058]: I1014 09:21:08.382163 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-p5ck7"]
Oct 14 09:21:08 crc kubenswrapper[5058]: W1014 09:21:08.385716 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabd592e_2738_4150_9e93_31f1678307bc.slice/crio-28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1 WatchSource:0}: Error finding container 28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1: Status 404 returned error can't find the container with id 28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1
Oct 14 09:21:09 crc kubenswrapper[5058]: I1014 09:21:09.050998 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" event={"ID":"cabd592e-2738-4150-9e93-31f1678307bc","Type":"ContainerStarted","Data":"28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1"}
Oct 14 09:21:09 crc kubenswrapper[5058]: I1014 09:21:09.054232 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" event={"ID":"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb","Type":"ContainerStarted","Data":"1e73fa2f0e59ae59ce23eef4d477eacd37124167d7e325f49eaa35c3d9b7c9a1"}
Oct 14 09:21:10 crc kubenswrapper[5058]: I1014 09:21:10.067114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" event={"ID":"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb","Type":"ContainerStarted","Data":"f0c359da5b0b93df15cb4bf6cbea6c400d29f49236e4245c1a66868733e2d98c"}
Oct 14 09:21:10 crc kubenswrapper[5058]: I1014 09:21:10.069106 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" event={"ID":"cabd592e-2738-4150-9e93-31f1678307bc","Type":"ContainerStarted","Data":"7f1933d7f24a06f54cf1fd887525943ebe0d215ac6f488c98db8e06eeaa1b62b"}
Oct 14 09:21:10 crc kubenswrapper[5058]: I1014 09:21:10.093572 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" podStartSLOduration=1.991182505 podStartE2EDuration="3.093551485s" podCreationTimestamp="2025-10-14 09:21:07 +0000 UTC" firstStartedPulling="2025-10-14 09:21:08.298006767 +0000 UTC m=+9216.209090583" lastFinishedPulling="2025-10-14 09:21:09.400375747 +0000 UTC m=+9217.311459563" observedRunningTime="2025-10-14 09:21:10.08742455 +0000 UTC m=+9217.998508356" watchObservedRunningTime="2025-10-14 09:21:10.093551485 +0000 UTC m=+9218.004635291"
Oct 14 09:21:10 crc kubenswrapper[5058]: I1014 09:21:10.110026 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" podStartSLOduration=2.089218969 podStartE2EDuration="3.110005554s" podCreationTimestamp="2025-10-14 09:21:07 +0000 UTC" firstStartedPulling="2025-10-14 09:21:08.388410434 +0000 UTC m=+9216.299494280" lastFinishedPulling="2025-10-14 09:21:09.409197049 +0000 UTC m=+9217.320280865" observedRunningTime="2025-10-14 09:21:10.108167891 +0000 UTC m=+9218.019251727" watchObservedRunningTime="2025-10-14 09:21:10.110005554 +0000 UTC m=+9218.021089380"
Oct 14 09:21:33 crc kubenswrapper[5058]: I1014 09:21:33.656759 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 09:21:33 crc kubenswrapper[5058]: I1014 09:21:33.658220 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 09:21:33 crc kubenswrapper[5058]: I1014 09:21:33.658295 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 09:21:33 crc kubenswrapper[5058]: I1014 09:21:33.659098 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 09:21:33 crc kubenswrapper[5058]: I1014 09:21:33.659255 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" gracePeriod=600
Oct 14 09:21:33 crc kubenswrapper[5058]: E1014 09:21:33.784926 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:21:34 crc kubenswrapper[5058]: I1014 09:21:34.384211 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" exitCode=0
Oct 14 09:21:34 crc kubenswrapper[5058]: I1014 09:21:34.384256 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"}
Oct 14 09:21:34 crc kubenswrapper[5058]: I1014 09:21:34.384682 5058 scope.go:117] "RemoveContainer" containerID="cd0cc303249430ec985841e3dee26e8a2bbfcb68a99cfd1c2d52543cd0fb8353"
Oct 14 09:21:34 crc kubenswrapper[5058]: I1014 09:21:34.385669 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"
Oct 14 09:21:34 crc kubenswrapper[5058]: E1014 09:21:34.386114 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:21:46 crc kubenswrapper[5058]: I1014 09:21:46.790908 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"
Oct 14 09:21:46 crc kubenswrapper[5058]: E1014 09:21:46.792125 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:21:48 crc kubenswrapper[5058]: I1014 09:21:48.279863 5058 scope.go:117] "RemoveContainer" containerID="094f1668ae6813f24930c204c4a5a739f3982eda15e2d3143b5e396ecd349ea8"
Oct 14 09:21:48 crc kubenswrapper[5058]: I1014 09:21:48.323130 5058 scope.go:117] "RemoveContainer" containerID="fe3e201715fb6417d79896120ec3b064eed057574d093ee289b3fb9ba7862cee"
Oct 14 09:21:48 crc kubenswrapper[5058]: I1014 09:21:48.405195 5058 scope.go:117] "RemoveContainer" containerID="b697cb8ab58f2904190d487add1505e83c5bc9e8f65aa44b410424d52ca04817"
Oct 14 09:21:59 crc kubenswrapper[5058]: I1014 09:21:59.790237 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"
Oct 14 09:21:59 crc kubenswrapper[5058]: E1014 09:21:59.791201 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:22:12 crc kubenswrapper[5058]: I1014 09:22:12.800061 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df"
Oct 14 09:22:12 crc kubenswrapper[5058]: E1014 09:22:12.801736 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:22:14 crc kubenswrapper[5058]: I1014 09:22:14.067887 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-fbdrz"] Oct 14 09:22:14 crc kubenswrapper[5058]: I1014 09:22:14.078698 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-fbdrz"] Oct 14 09:22:14 crc kubenswrapper[5058]: I1014 09:22:14.804912 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14878d9d-4680-4800-9951-2af5c7617134" path="/var/lib/kubelet/pods/14878d9d-4680-4800-9951-2af5c7617134/volumes" Oct 14 09:22:23 crc kubenswrapper[5058]: I1014 09:22:23.790735 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:22:23 crc kubenswrapper[5058]: E1014 09:22:23.791574 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:22:25 crc kubenswrapper[5058]: I1014 09:22:25.070143 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-c776-account-create-4s52d"] Oct 14 09:22:25 crc kubenswrapper[5058]: I1014 09:22:25.082208 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-c776-account-create-4s52d"] Oct 14 09:22:26 crc kubenswrapper[5058]: I1014 09:22:26.804839 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231a4daf-37c1-4500-83f1-708c2ac4c2c6" path="/var/lib/kubelet/pods/231a4daf-37c1-4500-83f1-708c2ac4c2c6/volumes" Oct 14 09:22:37 crc kubenswrapper[5058]: I1014 09:22:37.037997 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-nmf2l"] Oct 14 09:22:37 crc kubenswrapper[5058]: I1014 09:22:37.047247 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-nmf2l"] Oct 14 09:22:38 crc kubenswrapper[5058]: I1014 09:22:38.790496 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:22:38 crc kubenswrapper[5058]: E1014 09:22:38.791815 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:22:38 crc kubenswrapper[5058]: I1014 09:22:38.803223 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9a3643-6329-4117-b099-e8b649dc7906" path="/var/lib/kubelet/pods/8f9a3643-6329-4117-b099-e8b649dc7906/volumes" Oct 14 09:22:48 crc kubenswrapper[5058]: I1014 09:22:48.487593 5058 scope.go:117] "RemoveContainer" containerID="187773785e0d4b5865d98996893c53fec9176786e4460ca9e05272deda1c59ba" Oct 14 09:22:48 crc kubenswrapper[5058]: I1014 09:22:48.530430 5058 scope.go:117] "RemoveContainer" containerID="7a9cd5b982c8dc6e3198b3f7084f5cea40228d44d0bc69cbf6f8ec4c502a32ea" Oct 14 09:22:48 crc kubenswrapper[5058]: I1014 09:22:48.596269 5058 scope.go:117] "RemoveContainer" 
containerID="0b96349128eb9812630db74092622915e513a6ed27f81f17fcce0e583e3ee205" Oct 14 09:22:52 crc kubenswrapper[5058]: I1014 09:22:52.789766 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:22:52 crc kubenswrapper[5058]: E1014 09:22:52.790667 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:02 crc kubenswrapper[5058]: I1014 09:23:02.905403 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:02 crc kubenswrapper[5058]: I1014 09:23:02.910267 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:02 crc kubenswrapper[5058]: I1014 09:23:02.923144 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.044576 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfcbv\" (UniqueName: \"kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.044630 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.048224 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.149979 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.150305 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcbv\" (UniqueName: \"kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.150325 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content\") pod 
\"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.150721 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.150717 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.167467 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcbv\" (UniqueName: \"kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv\") pod \"redhat-operators-n77v5\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.263365 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.728837 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:03 crc kubenswrapper[5058]: I1014 09:23:03.790038 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:23:03 crc kubenswrapper[5058]: E1014 09:23:03.790443 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:04 crc kubenswrapper[5058]: I1014 09:23:04.537526 5058 generic.go:334] "Generic (PLEG): container finished" podID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerID="0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345" exitCode=0 Oct 14 09:23:04 crc kubenswrapper[5058]: I1014 09:23:04.537667 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerDied","Data":"0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345"} Oct 14 09:23:04 crc kubenswrapper[5058]: I1014 09:23:04.538016 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerStarted","Data":"1f1458449ec7471d10f41dea5718a700ea53adef14d25417c607c0bc0d69ef8c"} Oct 14 09:23:06 crc kubenswrapper[5058]: I1014 09:23:06.572010 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerStarted","Data":"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291"} Oct 14 09:23:09 crc 
kubenswrapper[5058]: I1014 09:23:09.613959 5058 generic.go:334] "Generic (PLEG): container finished" podID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerID="7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291" exitCode=0 Oct 14 09:23:09 crc kubenswrapper[5058]: I1014 09:23:09.614071 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerDied","Data":"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291"} Oct 14 09:23:10 crc kubenswrapper[5058]: I1014 09:23:10.631094 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerStarted","Data":"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af"} Oct 14 09:23:10 crc kubenswrapper[5058]: I1014 09:23:10.666390 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n77v5" podStartSLOduration=2.912763021 podStartE2EDuration="8.666369701s" podCreationTimestamp="2025-10-14 09:23:02 +0000 UTC" firstStartedPulling="2025-10-14 09:23:04.539464911 +0000 UTC m=+9332.450548717" lastFinishedPulling="2025-10-14 09:23:10.293071551 +0000 UTC m=+9338.204155397" observedRunningTime="2025-10-14 09:23:10.653828764 +0000 UTC m=+9338.564912580" watchObservedRunningTime="2025-10-14 09:23:10.666369701 +0000 UTC m=+9338.577453527" Oct 14 09:23:13 crc kubenswrapper[5058]: I1014 09:23:13.262165 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:13 crc kubenswrapper[5058]: I1014 09:23:13.263511 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:14 crc kubenswrapper[5058]: I1014 09:23:14.341649 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n77v5" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="registry-server" probeResult="failure" output=< Oct 14 09:23:14 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:23:14 crc kubenswrapper[5058]: > Oct 14 09:23:14 crc kubenswrapper[5058]: I1014 09:23:14.790153 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:23:14 crc kubenswrapper[5058]: E1014 09:23:14.790467 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:23 crc kubenswrapper[5058]: I1014 09:23:23.329590 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:23 crc kubenswrapper[5058]: I1014 09:23:23.400699 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:23 crc kubenswrapper[5058]: I1014 09:23:23.571945 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:24 crc kubenswrapper[5058]: 
I1014 09:23:24.784724 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n77v5" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="registry-server" containerID="cri-o://bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af" gracePeriod=2 Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.421557 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.586292 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities\") pod \"0d128a46-15d3-4a75-aa42-c27e9e19a758\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.586354 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfcbv\" (UniqueName: \"kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv\") pod \"0d128a46-15d3-4a75-aa42-c27e9e19a758\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.586395 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content\") pod \"0d128a46-15d3-4a75-aa42-c27e9e19a758\" (UID: \"0d128a46-15d3-4a75-aa42-c27e9e19a758\") " Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.587580 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities" (OuterVolumeSpecName: "utilities") pod "0d128a46-15d3-4a75-aa42-c27e9e19a758" (UID: "0d128a46-15d3-4a75-aa42-c27e9e19a758"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.598011 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv" (OuterVolumeSpecName: "kube-api-access-mfcbv") pod "0d128a46-15d3-4a75-aa42-c27e9e19a758" (UID: "0d128a46-15d3-4a75-aa42-c27e9e19a758"). InnerVolumeSpecName "kube-api-access-mfcbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.679755 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d128a46-15d3-4a75-aa42-c27e9e19a758" (UID: "0d128a46-15d3-4a75-aa42-c27e9e19a758"). InnerVolumeSpecName "catalog-content". 
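The `gracePeriod=2` in the kill record above is the window the runtime gives the container to exit on SIGTERM before escalating to SIGKILL. A standalone sketch of that stop pattern (an illustration, not CRI-O's actual implementation), using a `sleep` process as a stand-in for the container and assuming a Unix-like host:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container's init process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Ask politely first, as the runtime does for a graceful stop.
	cmd.Process.Signal(syscall.SIGTERM)

	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(2 * time.Second): // gracePeriod=2 from the log
		cmd.Process.Kill() // escalate to SIGKILL
		fmt.Println("grace period expired, killed:", <-done)
	}
}
```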
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.689476 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.689508 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfcbv\" (UniqueName: \"kubernetes.io/projected/0d128a46-15d3-4a75-aa42-c27e9e19a758-kube-api-access-mfcbv\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.689522 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d128a46-15d3-4a75-aa42-c27e9e19a758-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.794488 5058 generic.go:334] "Generic (PLEG): container finished" podID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerID="bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af" exitCode=0 Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.794526 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerDied","Data":"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af"} Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.794546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n77v5" event={"ID":"0d128a46-15d3-4a75-aa42-c27e9e19a758","Type":"ContainerDied","Data":"1f1458449ec7471d10f41dea5718a700ea53adef14d25417c607c0bc0d69ef8c"} Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.794562 5058 scope.go:117] "RemoveContainer" containerID="bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.794602 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n77v5" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.814462 5058 scope.go:117] "RemoveContainer" containerID="7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.842227 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.843272 5058 scope.go:117] "RemoveContainer" containerID="0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.851892 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n77v5"] Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.913931 5058 scope.go:117] "RemoveContainer" containerID="bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af" Oct 14 09:23:25 crc kubenswrapper[5058]: E1014 09:23:25.914500 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af\": container with ID starting with bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af not found: ID does not exist" containerID="bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.914545 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af"} err="failed to get container status \"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af\": rpc error: code = NotFound desc = could not find container \"bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af\": container with ID starting with bb929a0de0578f8cce5f9f90d8514160902e5b5e8daba2de403bd448ea2f02af not found: ID does not exist" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.914573 5058 scope.go:117] "RemoveContainer" containerID="7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291" Oct 14 09:23:25 crc kubenswrapper[5058]: E1014 09:23:25.917714 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291\": container with ID starting with 7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291 not found: ID does not exist" containerID="7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.917749 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291"} err="failed to get container status \"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291\": rpc error: code = NotFound desc = could not find container \"7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291\": container with ID starting with 7ebfa6f517bc027775343364979d79ce3ec132f6b2c3700dae1e71d21051a291 not found: ID does not exist" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.917767 5058 scope.go:117] "RemoveContainer" containerID="0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345" Oct 14 09:23:25 crc kubenswrapper[5058]: E1014 09:23:25.918188 5058 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345\": container with ID starting with 0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345 not found: ID does not exist" containerID="0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345" Oct 14 09:23:25 crc kubenswrapper[5058]: I1014 09:23:25.918213 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345"} err="failed to get container status \"0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345\": rpc error: code = NotFound desc = could not find container \"0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345\": container with ID starting with 0e7057ce0c5b6ceee28f000a8245320f2846c9c1e53daf2d0eea901acca03345 not found: ID does not exist" Oct 14 09:23:26 crc kubenswrapper[5058]: I1014 09:23:26.790787 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:23:26 crc kubenswrapper[5058]: E1014 09:23:26.791848 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:26 crc kubenswrapper[5058]: I1014 09:23:26.823665 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" path="/var/lib/kubelet/pods/0d128a46-15d3-4a75-aa42-c27e9e19a758/volumes" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.179419 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:40 crc kubenswrapper[5058]: E1014 09:23:40.180645 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="registry-server" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.180667 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="registry-server" Oct 14 09:23:40 crc kubenswrapper[5058]: E1014 09:23:40.180684 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="extract-content" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.180696 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="extract-content" Oct 14 09:23:40 crc kubenswrapper[5058]: E1014 09:23:40.180741 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="extract-utilities" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.180755 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="extract-utilities" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.181165 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d128a46-15d3-4a75-aa42-c27e9e19a758" containerName="registry-server" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.186925 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.234906 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.373644 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.373714 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.373731 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwj7\" (UniqueName: \"kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.477719 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.478082 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.478207 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwj7\" (UniqueName: \"kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.478440 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.478250 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.501806 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xcwj7\" (UniqueName: \"kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7\") pod \"redhat-marketplace-9qldj\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:40 crc kubenswrapper[5058]: I1014 09:23:40.580287 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:41 crc kubenswrapper[5058]: I1014 09:23:41.068971 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:41 crc kubenswrapper[5058]: W1014 09:23:41.075920 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod553a18da_c422_4854_8a27_92c95d4699bc.slice/crio-e549ab5b0070eebf7f9af963ed9025b34445fed0e446565c0751951d253b5786 WatchSource:0}: Error finding container e549ab5b0070eebf7f9af963ed9025b34445fed0e446565c0751951d253b5786: Status 404 returned error can't find the container with id e549ab5b0070eebf7f9af963ed9025b34445fed0e446565c0751951d253b5786 Oct 14 09:23:41 crc kubenswrapper[5058]: I1014 09:23:41.790354 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:23:41 crc kubenswrapper[5058]: E1014 09:23:41.791031 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:42 crc kubenswrapper[5058]: I1014 09:23:42.003592 5058 generic.go:334] "Generic (PLEG): container finished" podID="553a18da-c422-4854-8a27-92c95d4699bc" containerID="195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1" exitCode=0 Oct 14 09:23:42 crc kubenswrapper[5058]: I1014 09:23:42.003636 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerDied","Data":"195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1"} Oct 14 09:23:42 crc kubenswrapper[5058]: I1014 09:23:42.003706 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerStarted","Data":"e549ab5b0070eebf7f9af963ed9025b34445fed0e446565c0751951d253b5786"} Oct 14 09:23:44 crc kubenswrapper[5058]: I1014 09:23:44.037711 5058 generic.go:334] "Generic (PLEG): container finished" podID="553a18da-c422-4854-8a27-92c95d4699bc" containerID="f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2" exitCode=0 Oct 14 09:23:44 crc kubenswrapper[5058]: I1014 09:23:44.037823 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerDied","Data":"f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2"} Oct 14 09:23:45 crc kubenswrapper[5058]: I1014 09:23:45.048960 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" 
event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerStarted","Data":"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb"} Oct 14 09:23:45 crc kubenswrapper[5058]: I1014 09:23:45.076073 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qldj" podStartSLOduration=2.323831329 podStartE2EDuration="5.076052553s" podCreationTimestamp="2025-10-14 09:23:40 +0000 UTC" firstStartedPulling="2025-10-14 09:23:42.005502265 +0000 UTC m=+9369.916586071" lastFinishedPulling="2025-10-14 09:23:44.757723459 +0000 UTC m=+9372.668807295" observedRunningTime="2025-10-14 09:23:45.068425045 +0000 UTC m=+9372.979508851" watchObservedRunningTime="2025-10-14 09:23:45.076052553 +0000 UTC m=+9372.987136379" Oct 14 09:23:50 crc kubenswrapper[5058]: I1014 09:23:50.581084 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:50 crc kubenswrapper[5058]: I1014 09:23:50.581616 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:50 crc kubenswrapper[5058]: I1014 09:23:50.645668 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:51 crc kubenswrapper[5058]: I1014 09:23:51.187580 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:51 crc kubenswrapper[5058]: I1014 09:23:51.258862 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:52 crc kubenswrapper[5058]: I1014 09:23:52.802095 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:23:52 crc kubenswrapper[5058]: E1014 09:23:52.802599 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.147874 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qldj" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="registry-server" containerID="cri-o://c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb" gracePeriod=2 Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.675108 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.772047 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcwj7\" (UniqueName: \"kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7\") pod \"553a18da-c422-4854-8a27-92c95d4699bc\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.772253 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities\") pod \"553a18da-c422-4854-8a27-92c95d4699bc\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.772348 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content\") pod \"553a18da-c422-4854-8a27-92c95d4699bc\" (UID: \"553a18da-c422-4854-8a27-92c95d4699bc\") " Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.773148 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities" (OuterVolumeSpecName: "utilities") pod "553a18da-c422-4854-8a27-92c95d4699bc" (UID: "553a18da-c422-4854-8a27-92c95d4699bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.777234 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7" (OuterVolumeSpecName: "kube-api-access-xcwj7") pod "553a18da-c422-4854-8a27-92c95d4699bc" (UID: "553a18da-c422-4854-8a27-92c95d4699bc"). InnerVolumeSpecName "kube-api-access-xcwj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.784247 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "553a18da-c422-4854-8a27-92c95d4699bc" (UID: "553a18da-c422-4854-8a27-92c95d4699bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.875029 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.875078 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcwj7\" (UniqueName: \"kubernetes.io/projected/553a18da-c422-4854-8a27-92c95d4699bc-kube-api-access-xcwj7\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:53 crc kubenswrapper[5058]: I1014 09:23:53.875090 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/553a18da-c422-4854-8a27-92c95d4699bc-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.163014 5058 generic.go:334] "Generic (PLEG): container finished" podID="553a18da-c422-4854-8a27-92c95d4699bc" containerID="c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb" exitCode=0 Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.163104 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qldj" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.163095 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerDied","Data":"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb"} Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.163589 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qldj" event={"ID":"553a18da-c422-4854-8a27-92c95d4699bc","Type":"ContainerDied","Data":"e549ab5b0070eebf7f9af963ed9025b34445fed0e446565c0751951d253b5786"} Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.163624 5058 scope.go:117] "RemoveContainer" containerID="c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.167690 5058 generic.go:334] "Generic (PLEG): container finished" podID="cabd592e-2738-4150-9e93-31f1678307bc" containerID="7f1933d7f24a06f54cf1fd887525943ebe0d215ac6f488c98db8e06eeaa1b62b" exitCode=0 Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.167739 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" event={"ID":"cabd592e-2738-4150-9e93-31f1678307bc","Type":"ContainerDied","Data":"7f1933d7f24a06f54cf1fd887525943ebe0d215ac6f488c98db8e06eeaa1b62b"} Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.205635 5058 scope.go:117] "RemoveContainer" containerID="f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.226843 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.239081 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qldj"] Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.809137 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553a18da-c422-4854-8a27-92c95d4699bc" path="/var/lib/kubelet/pods/553a18da-c422-4854-8a27-92c95d4699bc/volumes" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 
09:23:54.925229 5058 scope.go:117] "RemoveContainer" containerID="195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.980511 5058 scope.go:117] "RemoveContainer" containerID="c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb" Oct 14 09:23:54 crc kubenswrapper[5058]: E1014 09:23:54.981292 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb\": container with ID starting with c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb not found: ID does not exist" containerID="c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.981336 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb"} err="failed to get container status \"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb\": rpc error: code = NotFound desc = could not find container \"c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb\": container with ID starting with c35aa6e69d3640c8ceedd656be17446e36d8896f54b42a7fccf069cbeb6c6acb not found: ID does not exist" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.981365 5058 scope.go:117] "RemoveContainer" containerID="f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2" Oct 14 09:23:54 crc kubenswrapper[5058]: E1014 09:23:54.982017 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2\": container with ID starting with f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2 not found: ID does not exist" containerID="f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.982070 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2"} err="failed to get container status \"f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2\": rpc error: code = NotFound desc = could not find container \"f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2\": container with ID starting with f149ababaa3a922120c5d30b2f20fd5c4a742752704c2e2d716a065935b755b2 not found: ID does not exist" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.982106 5058 scope.go:117] "RemoveContainer" containerID="195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1" Oct 14 09:23:54 crc kubenswrapper[5058]: E1014 09:23:54.982555 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1\": container with ID starting with 195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1 not found: ID does not exist" containerID="195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1" Oct 14 09:23:54 crc kubenswrapper[5058]: I1014 09:23:54.982594 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1"} err="failed to get container status 
\"195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1\": rpc error: code = NotFound desc = could not find container \"195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1\": container with ID starting with 195ce0d16af1af54ebf10f899f79b502f5dddeeed5939a84f3f2c610c56237d1 not found: ID does not exist" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.757054 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.839440 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh9zf\" (UniqueName: \"kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf\") pod \"cabd592e-2738-4150-9e93-31f1678307bc\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.839505 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle\") pod \"cabd592e-2738-4150-9e93-31f1678307bc\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.839776 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key\") pod \"cabd592e-2738-4150-9e93-31f1678307bc\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.840004 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory\") pod \"cabd592e-2738-4150-9e93-31f1678307bc\" (UID: \"cabd592e-2738-4150-9e93-31f1678307bc\") " Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.845281 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf" (OuterVolumeSpecName: "kube-api-access-lh9zf") pod "cabd592e-2738-4150-9e93-31f1678307bc" (UID: "cabd592e-2738-4150-9e93-31f1678307bc"). InnerVolumeSpecName "kube-api-access-lh9zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.854847 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cabd592e-2738-4150-9e93-31f1678307bc" (UID: "cabd592e-2738-4150-9e93-31f1678307bc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.889320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cabd592e-2738-4150-9e93-31f1678307bc" (UID: "cabd592e-2738-4150-9e93-31f1678307bc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.891512 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory" (OuterVolumeSpecName: "inventory") pod "cabd592e-2738-4150-9e93-31f1678307bc" (UID: "cabd592e-2738-4150-9e93-31f1678307bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.944790 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.944846 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.944867 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh9zf\" (UniqueName: \"kubernetes.io/projected/cabd592e-2738-4150-9e93-31f1678307bc-kube-api-access-lh9zf\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:55 crc kubenswrapper[5058]: I1014 09:23:55.944881 5058 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabd592e-2738-4150-9e93-31f1678307bc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.202638 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" event={"ID":"cabd592e-2738-4150-9e93-31f1678307bc","Type":"ContainerDied","Data":"28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1"} Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.202672 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-p5ck7" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.202734 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f9a29985c68a93d406bcf6a275958b2fa2a704a2f092ebf2f1b9a84e710ab1" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.206489 5058 generic.go:334] "Generic (PLEG): container finished" podID="f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" containerID="f0c359da5b0b93df15cb4bf6cbea6c400d29f49236e4245c1a66868733e2d98c" exitCode=0 Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.206555 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" event={"ID":"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb","Type":"ContainerDied","Data":"f0c359da5b0b93df15cb4bf6cbea6c400d29f49236e4245c1a66868733e2d98c"} Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.345389 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-qln66"] Oct 14 09:23:56 crc kubenswrapper[5058]: E1014 09:23:56.345895 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="registry-server" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.345913 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="registry-server" Oct 14 09:23:56 crc kubenswrapper[5058]: E1014 09:23:56.345931 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="extract-utilities" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.345939 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="extract-utilities" Oct 14 09:23:56 crc kubenswrapper[5058]: E1014 09:23:56.345968 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabd592e-2738-4150-9e93-31f1678307bc" containerName="bootstrap-openstack-openstack-cell1" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.345976 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabd592e-2738-4150-9e93-31f1678307bc" containerName="bootstrap-openstack-openstack-cell1" Oct 14 09:23:56 crc kubenswrapper[5058]: E1014 09:23:56.345987 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="extract-content" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.345994 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="extract-content" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.346264 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a18da-c422-4854-8a27-92c95d4699bc" containerName="registry-server" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.346291 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabd592e-2738-4150-9e93-31f1678307bc" containerName="bootstrap-openstack-openstack-cell1" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.347121 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.350747 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.351077 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.357133 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-qln66"] Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.459080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.459475 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.459683 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r28w\" (UniqueName: \"kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.560810 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r28w\" (UniqueName: \"kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.560985 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.561066 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.573697 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " 
pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.574358 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.576780 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r28w\" (UniqueName: \"kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w\") pod \"download-cache-openstack-openstack-cell1-qln66\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:56 crc kubenswrapper[5058]: I1014 09:23:56.670501 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.306176 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-qln66"] Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.818162 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.897035 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory\") pod \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.897132 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key\") pod \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.897155 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle\") pod \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.897412 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzxms\" (UniqueName: \"kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms\") pod \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\" (UID: \"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb\") " Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.903667 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms" (OuterVolumeSpecName: "kube-api-access-tzxms") pod "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" (UID: "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb"). InnerVolumeSpecName "kube-api-access-tzxms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.904929 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" (UID: "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.938087 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory" (OuterVolumeSpecName: "inventory") pod "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" (UID: "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.941106 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" (UID: "f4e4eb3f-93d2-425e-b655-ec4a3c2649bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.999732 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzxms\" (UniqueName: \"kubernetes.io/projected/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-kube-api-access-tzxms\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.999766 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.999775 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:57 crc kubenswrapper[5058]: I1014 09:23:57.999786 5058 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4eb3f-93d2-425e-b655-ec4a3c2649bb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.231100 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-qln66" event={"ID":"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df","Type":"ContainerStarted","Data":"272c89aaa837c4377c61ba8f48ee9db0c6561418fc0189ae8612648bf1fe6ab9"} Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.231139 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-qln66" event={"ID":"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df","Type":"ContainerStarted","Data":"cecf84a6b2c4c9af489496fa52378030151bd3636b13242f0eb6fff65cc99d2a"} Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.240338 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" event={"ID":"f4e4eb3f-93d2-425e-b655-ec4a3c2649bb","Type":"ContainerDied","Data":"1e73fa2f0e59ae59ce23eef4d477eacd37124167d7e325f49eaa35c3d9b7c9a1"} Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.240370 5058 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="1e73fa2f0e59ae59ce23eef4d477eacd37124167d7e325f49eaa35c3d9b7c9a1" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.240429 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell2-dndm2" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.264920 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-qln66" podStartSLOduration=1.734312799 podStartE2EDuration="2.264888802s" podCreationTimestamp="2025-10-14 09:23:56 +0000 UTC" firstStartedPulling="2025-10-14 09:23:57.406848536 +0000 UTC m=+9385.317932352" lastFinishedPulling="2025-10-14 09:23:57.937424539 +0000 UTC m=+9385.848508355" observedRunningTime="2025-10-14 09:23:58.249861604 +0000 UTC m=+9386.160945420" watchObservedRunningTime="2025-10-14 09:23:58.264888802 +0000 UTC m=+9386.175972608" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.308676 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell2-q5n4f"] Oct 14 09:23:58 crc kubenswrapper[5058]: E1014 09:23:58.309331 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" containerName="bootstrap-openstack-openstack-cell2" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.309350 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" containerName="bootstrap-openstack-openstack-cell2" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.309724 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e4eb3f-93d2-425e-b655-ec4a3c2649bb" containerName="bootstrap-openstack-openstack-cell2" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.310731 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.312701 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.313460 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.318331 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell2-q5n4f"] Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.407413 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.407463 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.407546 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxmx\" (UniqueName: \"kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.509990 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.510075 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.510244 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxmx\" (UniqueName: \"kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.516052 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " 
pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.516345 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.532673 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxmx\" (UniqueName: \"kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx\") pod \"download-cache-openstack-openstack-cell2-q5n4f\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:58 crc kubenswrapper[5058]: I1014 09:23:58.645852 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:23:59 crc kubenswrapper[5058]: I1014 09:23:59.235971 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell2-q5n4f"] Oct 14 09:23:59 crc kubenswrapper[5058]: W1014 09:23:59.242934 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd39a4d_1aa8_41e4_8c0d_7ab02860dad7.slice/crio-40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15 WatchSource:0}: Error finding container 40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15: Status 404 returned error can't find the container with id 40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15 Oct 14 09:24:00 crc kubenswrapper[5058]: I1014 09:24:00.271012 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" event={"ID":"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7","Type":"ContainerStarted","Data":"6ad63e2033e427b5ba9a2da3fc4afb092a36e36963d40a1bbbd2731c971b6395"} Oct 14 09:24:00 crc kubenswrapper[5058]: I1014 09:24:00.271450 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" event={"ID":"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7","Type":"ContainerStarted","Data":"40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15"} Oct 14 09:24:00 crc kubenswrapper[5058]: I1014 09:24:00.296934 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" podStartSLOduration=1.7865720729999999 podStartE2EDuration="2.296915369s" podCreationTimestamp="2025-10-14 09:23:58 +0000 UTC" firstStartedPulling="2025-10-14 09:23:59.246530292 +0000 UTC m=+9387.157614108" lastFinishedPulling="2025-10-14 09:23:59.756873588 +0000 UTC m=+9387.667957404" observedRunningTime="2025-10-14 09:24:00.291302829 +0000 UTC m=+9388.202386675" watchObservedRunningTime="2025-10-14 09:24:00.296915369 +0000 UTC m=+9388.207999185" Oct 14 09:24:06 crc kubenswrapper[5058]: I1014 09:24:06.791298 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:24:06 crc kubenswrapper[5058]: E1014 09:24:06.792759 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:24:19 crc kubenswrapper[5058]: I1014 09:24:19.791370 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:24:19 crc kubenswrapper[5058]: E1014 09:24:19.792519 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:24:33 crc kubenswrapper[5058]: I1014 09:24:33.791372 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:24:33 crc kubenswrapper[5058]: E1014 09:24:33.792193 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:24:44 crc kubenswrapper[5058]: I1014 09:24:44.790954 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:24:44 crc kubenswrapper[5058]: E1014 09:24:44.792080 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:24:57 crc kubenswrapper[5058]: I1014 09:24:57.791216 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:24:57 crc kubenswrapper[5058]: E1014 09:24:57.792546 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:25:10 crc kubenswrapper[5058]: I1014 09:25:10.791011 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:25:10 crc kubenswrapper[5058]: E1014 09:25:10.793497 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.446528 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.452908 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.485062 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.630890 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggp7f\" (UniqueName: \"kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.631388 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.631465 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.733430 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggp7f\" (UniqueName: \"kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.733529 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.733589 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.734170 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.734325 5058 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.762870 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggp7f\" (UniqueName: \"kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f\") pod \"certified-operators-gbrr8\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.790739 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:25:23 crc kubenswrapper[5058]: E1014 09:25:23.791125 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:25:23 crc kubenswrapper[5058]: I1014 09:25:23.800993 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:24 crc kubenswrapper[5058]: I1014 09:25:24.360535 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] Oct 14 09:25:25 crc kubenswrapper[5058]: I1014 09:25:25.335788 5058 generic.go:334] "Generic (PLEG): container finished" podID="9318b746-5455-46d0-a2bb-b717bf93735f" containerID="39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae" exitCode=0 Oct 14 09:25:25 crc kubenswrapper[5058]: I1014 09:25:25.335911 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerDied","Data":"39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae"} Oct 14 09:25:25 crc kubenswrapper[5058]: I1014 09:25:25.336227 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerStarted","Data":"fbd423b7318c9bf1b859a59e0e5572c3d937296df26cd1e33f43570c61df6148"} Oct 14 09:25:26 crc kubenswrapper[5058]: I1014 09:25:26.349997 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerStarted","Data":"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2"} Oct 14 09:25:27 crc kubenswrapper[5058]: E1014 09:25:27.525967 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9318b746_5455_46d0_a2bb_b717bf93735f.slice/crio-conmon-934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9318b746_5455_46d0_a2bb_b717bf93735f.slice/crio-934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2.scope\": RecentStats: unable to find data in memory cache]" Oct 14 09:25:28 crc kubenswrapper[5058]: I1014 09:25:28.379016 5058 generic.go:334] "Generic (PLEG): container finished" podID="9318b746-5455-46d0-a2bb-b717bf93735f" containerID="934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2" exitCode=0 Oct 14 09:25:28 crc kubenswrapper[5058]: I1014 09:25:28.379081 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerDied","Data":"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2"} Oct 14 09:25:29 crc kubenswrapper[5058]: I1014 09:25:29.397572 5058 generic.go:334] "Generic (PLEG): container finished" podID="e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" containerID="272c89aaa837c4377c61ba8f48ee9db0c6561418fc0189ae8612648bf1fe6ab9" exitCode=0 Oct 14 09:25:29 crc kubenswrapper[5058]: I1014 09:25:29.397697 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-qln66" event={"ID":"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df","Type":"ContainerDied","Data":"272c89aaa837c4377c61ba8f48ee9db0c6561418fc0189ae8612648bf1fe6ab9"} Oct 14 09:25:30 crc kubenswrapper[5058]: I1014 09:25:30.415299 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerStarted","Data":"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d"} Oct 14 09:25:30 crc kubenswrapper[5058]: I1014 09:25:30.452143 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbrr8" podStartSLOduration=2.933321115 podStartE2EDuration="7.452117329s" podCreationTimestamp="2025-10-14 09:25:23 +0000 UTC" firstStartedPulling="2025-10-14 09:25:25.33881114 +0000 UTC m=+9473.249894956" lastFinishedPulling="2025-10-14 09:25:29.857607324 +0000 UTC m=+9477.768691170" observedRunningTime="2025-10-14 09:25:30.439135919 +0000 UTC m=+9478.350219735" watchObservedRunningTime="2025-10-14 09:25:30.452117329 +0000 UTC m=+9478.363201155" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.003332 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.029466 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory\") pod \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.029563 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key\") pod \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.029727 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r28w\" (UniqueName: \"kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w\") pod \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\" (UID: \"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df\") " Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.036317 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w" (OuterVolumeSpecName: "kube-api-access-6r28w") pod "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" (UID: "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df"). InnerVolumeSpecName "kube-api-access-6r28w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.061231 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" (UID: "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.061641 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory" (OuterVolumeSpecName: "inventory") pod "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" (UID: "e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.132658 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r28w\" (UniqueName: \"kubernetes.io/projected/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-kube-api-access-6r28w\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.132696 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.132707 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.429031 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-qln66" event={"ID":"e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df","Type":"ContainerDied","Data":"cecf84a6b2c4c9af489496fa52378030151bd3636b13242f0eb6fff65cc99d2a"} Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.429089 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cecf84a6b2c4c9af489496fa52378030151bd3636b13242f0eb6fff65cc99d2a" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.430003 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-qln66" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.543830 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-rvbrk"] Oct 14 09:25:31 crc kubenswrapper[5058]: E1014 09:25:31.544502 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" containerName="download-cache-openstack-openstack-cell1" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.544521 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" containerName="download-cache-openstack-openstack-cell1" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.544895 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df" containerName="download-cache-openstack-openstack-cell1" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.546062 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.555442 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-rvbrk"] Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.562054 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.562244 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.640982 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.641144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.641248 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6wdh\" (UniqueName: \"kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.742865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.742983 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6wdh\" (UniqueName: \"kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.743045 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.748020 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: 
\"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.754520 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.761501 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6wdh\" (UniqueName: \"kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh\") pod \"configure-network-openstack-openstack-cell1-rvbrk\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:31 crc kubenswrapper[5058]: I1014 09:25:31.870670 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.476760 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-rvbrk"] Oct 14 09:25:32 crc kubenswrapper[5058]: W1014 09:25:32.490158 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a063aae_0e40_4f4c_9f59_08ff5c699706.slice/crio-2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3 WatchSource:0}: Error finding container 2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3: Status 404 returned error can't find the container with id 2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3 Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.828734 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.836918 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.865064 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.865199 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.865319 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.865424 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-996cq\" (UniqueName: \"kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.968533 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.968631 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.968716 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-996cq\" (UniqueName: \"kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.969671 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.970003 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:32 crc kubenswrapper[5058]: I1014 09:25:32.989964 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-996cq\" (UniqueName: \"kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq\") pod \"community-operators-hxlck\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.183560 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.456461 5058 generic.go:334] "Generic (PLEG): container finished" podID="fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" containerID="6ad63e2033e427b5ba9a2da3fc4afb092a36e36963d40a1bbbd2731c971b6395" exitCode=0 Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.456840 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" event={"ID":"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7","Type":"ContainerDied","Data":"6ad63e2033e427b5ba9a2da3fc4afb092a36e36963d40a1bbbd2731c971b6395"} Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.459026 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" event={"ID":"9a063aae-0e40-4f4c-9f59-08ff5c699706","Type":"ContainerStarted","Data":"2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3"} Oct 14 09:25:33 crc kubenswrapper[5058]: W1014 09:25:33.704051 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bcdad85_f38f_4ab2_94a9_6cfdb66df4a0.slice/crio-e0dc2594c023a07a05d4860c775c7f1299e276bb873c7cdad7d093d5f33f7a35 WatchSource:0}: Error finding container e0dc2594c023a07a05d4860c775c7f1299e276bb873c7cdad7d093d5f33f7a35: Status 404 returned error can't find the container with id e0dc2594c023a07a05d4860c775c7f1299e276bb873c7cdad7d093d5f33f7a35 Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.706397 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.801340 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.801400 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:33 crc kubenswrapper[5058]: I1014 09:25:33.858139 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.470142 5058 generic.go:334] "Generic (PLEG): container finished" podID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerID="76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6" exitCode=0 Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.470245 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerDied","Data":"76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6"} Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.470270 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" 
event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerStarted","Data":"e0dc2594c023a07a05d4860c775c7f1299e276bb873c7cdad7d093d5f33f7a35"} Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.471902 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" event={"ID":"9a063aae-0e40-4f4c-9f59-08ff5c699706","Type":"ContainerStarted","Data":"79509a68411cab794c6d1881d418f84bcd35c697074e9358434df1b05c240727"} Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.543680 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" podStartSLOduration=2.984223141 podStartE2EDuration="3.543660597s" podCreationTimestamp="2025-10-14 09:25:31 +0000 UTC" firstStartedPulling="2025-10-14 09:25:32.493703169 +0000 UTC m=+9480.404786985" lastFinishedPulling="2025-10-14 09:25:33.053140635 +0000 UTC m=+9480.964224441" observedRunningTime="2025-10-14 09:25:34.5388537 +0000 UTC m=+9482.449937516" watchObservedRunningTime="2025-10-14 09:25:34.543660597 +0000 UTC m=+9482.454744403" Oct 14 09:25:34 crc kubenswrapper[5058]: I1014 09:25:34.964907 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.017654 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxmx\" (UniqueName: \"kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx\") pod \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.017774 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory\") pod \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.017809 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key\") pod \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\" (UID: \"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7\") " Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.024134 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx" (OuterVolumeSpecName: "kube-api-access-wvxmx") pod "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" (UID: "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7"). InnerVolumeSpecName "kube-api-access-wvxmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.050898 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory" (OuterVolumeSpecName: "inventory") pod "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" (UID: "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.066237 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" (UID: "fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.120380 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxmx\" (UniqueName: \"kubernetes.io/projected/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-kube-api-access-wvxmx\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.120414 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.120426 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.486226 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerStarted","Data":"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0"} Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.489953 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.495524 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell2-q5n4f" event={"ID":"fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7","Type":"ContainerDied","Data":"40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15"} Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.495694 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40cf7c0633eec9d4c660dc87ea095317d02ad686c0e98929f6cdf501e9c77c15" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.585452 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell2-lgkqn"] Oct 14 09:25:35 crc kubenswrapper[5058]: E1014 09:25:35.586005 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" containerName="download-cache-openstack-openstack-cell2" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.586032 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" containerName="download-cache-openstack-openstack-cell2" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.586319 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7" containerName="download-cache-openstack-openstack-cell2" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.587237 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.590520 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.590908 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.592761 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell2-lgkqn"] Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.633438 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.633469 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.633657 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6p5\" (UniqueName: \"kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.734963 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.735624 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.735825 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6p5\" (UniqueName: \"kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.739486 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: 
\"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.741114 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.765247 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6p5\" (UniqueName: \"kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5\") pod \"configure-network-openstack-openstack-cell2-lgkqn\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:35 crc kubenswrapper[5058]: I1014 09:25:35.943933 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:25:36 crc kubenswrapper[5058]: I1014 09:25:36.588647 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell2-lgkqn"] Oct 14 09:25:36 crc kubenswrapper[5058]: W1014 09:25:36.594652 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0067ccfb_d0f9_4b48_bcc6_73f70a2e206b.slice/crio-7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5 WatchSource:0}: Error finding container 7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5: Status 404 returned error can't find the container with id 7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5 Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.517097 5058 generic.go:334] "Generic (PLEG): container finished" podID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerID="e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0" exitCode=0 Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.517194 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerDied","Data":"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0"} Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.521340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" event={"ID":"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b","Type":"ContainerStarted","Data":"ba4cd1f5ae6c26b037f0130ebbafc57c2fa28aa7ef0e75c63eb69b09a1b7b83d"} Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.521391 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" event={"ID":"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b","Type":"ContainerStarted","Data":"7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5"} Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.582755 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" podStartSLOduration=1.969812838 podStartE2EDuration="2.582728377s" podCreationTimestamp="2025-10-14 09:25:35 +0000 UTC" firstStartedPulling="2025-10-14 09:25:36.597343472 +0000 UTC m=+9484.508427318" 
lastFinishedPulling="2025-10-14 09:25:37.210259021 +0000 UTC m=+9485.121342857" observedRunningTime="2025-10-14 09:25:37.572828005 +0000 UTC m=+9485.483911811" watchObservedRunningTime="2025-10-14 09:25:37.582728377 +0000 UTC m=+9485.493812223" Oct 14 09:25:37 crc kubenswrapper[5058]: I1014 09:25:37.789905 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:25:37 crc kubenswrapper[5058]: E1014 09:25:37.790498 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:25:39 crc kubenswrapper[5058]: I1014 09:25:39.552646 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerStarted","Data":"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2"} Oct 14 09:25:39 crc kubenswrapper[5058]: I1014 09:25:39.586212 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hxlck" podStartSLOduration=3.7894815360000003 podStartE2EDuration="7.586194841s" podCreationTimestamp="2025-10-14 09:25:32 +0000 UTC" firstStartedPulling="2025-10-14 09:25:34.472164769 +0000 UTC m=+9482.383248575" lastFinishedPulling="2025-10-14 09:25:38.268878074 +0000 UTC m=+9486.179961880" observedRunningTime="2025-10-14 09:25:39.578240984 +0000 UTC m=+9487.489324790" watchObservedRunningTime="2025-10-14 09:25:39.586194841 +0000 UTC m=+9487.497278647" Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.184422 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.184852 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.276747 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.691637 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.770395 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:43 crc kubenswrapper[5058]: I1014 09:25:43.889872 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:45 crc kubenswrapper[5058]: I1014 09:25:45.644423 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hxlck" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="registry-server" containerID="cri-o://f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2" gracePeriod=2 Oct 14 09:25:45 crc kubenswrapper[5058]: I1014 09:25:45.923934 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] 
Oct 14 09:25:45 crc kubenswrapper[5058]: I1014 09:25:45.924690 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbrr8" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="registry-server" containerID="cri-o://9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d" gracePeriod=2 Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.242584 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.300435 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content\") pod \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.300872 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-996cq\" (UniqueName: \"kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq\") pod \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.300952 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities\") pod \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\" (UID: \"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.303093 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities" (OuterVolumeSpecName: "utilities") pod "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" (UID: "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.309573 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq" (OuterVolumeSpecName: "kube-api-access-996cq") pod "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" (UID: "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0"). InnerVolumeSpecName "kube-api-access-996cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.343806 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" (UID: "5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.392190 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.402912 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities\") pod \"9318b746-5455-46d0-a2bb-b717bf93735f\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.403051 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content\") pod \"9318b746-5455-46d0-a2bb-b717bf93735f\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.403308 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggp7f\" (UniqueName: \"kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f\") pod \"9318b746-5455-46d0-a2bb-b717bf93735f\" (UID: \"9318b746-5455-46d0-a2bb-b717bf93735f\") " Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.403809 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities" (OuterVolumeSpecName: "utilities") pod "9318b746-5455-46d0-a2bb-b717bf93735f" (UID: "9318b746-5455-46d0-a2bb-b717bf93735f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.406336 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f" (OuterVolumeSpecName: "kube-api-access-ggp7f") pod "9318b746-5455-46d0-a2bb-b717bf93735f" (UID: "9318b746-5455-46d0-a2bb-b717bf93735f"). InnerVolumeSpecName "kube-api-access-ggp7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.412450 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-996cq\" (UniqueName: \"kubernetes.io/projected/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-kube-api-access-996cq\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.412498 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.412523 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.412543 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.412560 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggp7f\" (UniqueName: \"kubernetes.io/projected/9318b746-5455-46d0-a2bb-b717bf93735f-kube-api-access-ggp7f\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.455406 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9318b746-5455-46d0-a2bb-b717bf93735f" (UID: "9318b746-5455-46d0-a2bb-b717bf93735f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.515129 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9318b746-5455-46d0-a2bb-b717bf93735f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.656343 5058 generic.go:334] "Generic (PLEG): container finished" podID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerID="f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2" exitCode=0 Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.656405 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerDied","Data":"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2"} Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.656432 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxlck" event={"ID":"5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0","Type":"ContainerDied","Data":"e0dc2594c023a07a05d4860c775c7f1299e276bb873c7cdad7d093d5f33f7a35"} Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.656451 5058 scope.go:117] "RemoveContainer" containerID="f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.656528 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxlck" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.660757 5058 generic.go:334] "Generic (PLEG): container finished" podID="9318b746-5455-46d0-a2bb-b717bf93735f" containerID="9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d" exitCode=0 Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.660832 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerDied","Data":"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d"} Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.660874 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbrr8" event={"ID":"9318b746-5455-46d0-a2bb-b717bf93735f","Type":"ContainerDied","Data":"fbd423b7318c9bf1b859a59e0e5572c3d937296df26cd1e33f43570c61df6148"} Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.660880 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbrr8" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.687438 5058 scope.go:117] "RemoveContainer" containerID="e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.712011 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.725021 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hxlck"] Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.731660 5058 scope.go:117] "RemoveContainer" containerID="76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.733704 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.741209 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbrr8"] Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.753711 5058 scope.go:117] "RemoveContainer" containerID="f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.754185 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2\": container with ID starting with f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2 not found: ID does not exist" containerID="f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.754253 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2"} err="failed to get container status \"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2\": rpc error: code = NotFound desc = could not find container \"f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2\": container with ID starting with f8fd4b77d36c126b7d7462a7134f99708ecf68bb2eede5ea242faeb281dce3e2 not found: ID does not exist" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.754309 5058 scope.go:117] 
"RemoveContainer" containerID="e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.754527 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0\": container with ID starting with e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0 not found: ID does not exist" containerID="e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.754552 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0"} err="failed to get container status \"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0\": rpc error: code = NotFound desc = could not find container \"e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0\": container with ID starting with e47e12128697ceaa62d90fb6425972ea946e0a7ba3528ce50e983b052f1276c0 not found: ID does not exist" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.754565 5058 scope.go:117] "RemoveContainer" containerID="76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.754939 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6\": container with ID starting with 76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6 not found: ID does not exist" containerID="76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.754985 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6"} err="failed to get container status \"76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6\": rpc error: code = NotFound desc = could not find container \"76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6\": container with ID starting with 76b9369bae4f7c9d4f5e32111d6b1c2f7aafc0f47f79e5dbc5deba6c5fe6daf6 not found: ID does not exist" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.755013 5058 scope.go:117] "RemoveContainer" containerID="9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.817027 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" path="/var/lib/kubelet/pods/5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0/volumes" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.818279 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" path="/var/lib/kubelet/pods/9318b746-5455-46d0-a2bb-b717bf93735f/volumes" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.851466 5058 scope.go:117] "RemoveContainer" containerID="934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.881499 5058 scope.go:117] "RemoveContainer" containerID="39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.931537 5058 scope.go:117] "RemoveContainer" 
containerID="9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.932214 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d\": container with ID starting with 9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d not found: ID does not exist" containerID="9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.932267 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d"} err="failed to get container status \"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d\": rpc error: code = NotFound desc = could not find container \"9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d\": container with ID starting with 9427ebee92be3fe0fec526e8f486a50c9c4fdcc0437c64ee9776df007b502f5d not found: ID does not exist" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.932304 5058 scope.go:117] "RemoveContainer" containerID="934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.932866 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2\": container with ID starting with 934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2 not found: ID does not exist" containerID="934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.932900 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2"} err="failed to get container status \"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2\": rpc error: code = NotFound desc = could not find container \"934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2\": container with ID starting with 934e8685f633b10efe992501fb026018bf5590147b2eef4cb762fdcc87d587a2 not found: ID does not exist" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.932920 5058 scope.go:117] "RemoveContainer" containerID="39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae" Oct 14 09:25:46 crc kubenswrapper[5058]: E1014 09:25:46.933351 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae\": container with ID starting with 39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae not found: ID does not exist" containerID="39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae" Oct 14 09:25:46 crc kubenswrapper[5058]: I1014 09:25:46.933390 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae"} err="failed to get container status \"39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae\": rpc error: code = NotFound desc = could not find container \"39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae\": container with ID starting with 
39f69b673fc05e648db8d736721228a87c584d58a0144ded82585bd73ee208ae not found: ID does not exist" Oct 14 09:25:49 crc kubenswrapper[5058]: I1014 09:25:49.800113 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:25:49 crc kubenswrapper[5058]: E1014 09:25:49.801201 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:26:02 crc kubenswrapper[5058]: I1014 09:26:02.802513 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:26:02 crc kubenswrapper[5058]: E1014 09:26:02.803772 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:26:15 crc kubenswrapper[5058]: I1014 09:26:15.790562 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:26:15 crc kubenswrapper[5058]: E1014 09:26:15.791781 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:26:26 crc kubenswrapper[5058]: I1014 09:26:26.791152 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:26:26 crc kubenswrapper[5058]: E1014 09:26:26.791982 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:26:32 crc kubenswrapper[5058]: I1014 09:26:32.211537 5058 generic.go:334] "Generic (PLEG): container finished" podID="9a063aae-0e40-4f4c-9f59-08ff5c699706" containerID="79509a68411cab794c6d1881d418f84bcd35c697074e9358434df1b05c240727" exitCode=0 Oct 14 09:26:32 crc kubenswrapper[5058]: I1014 09:26:32.212185 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" event={"ID":"9a063aae-0e40-4f4c-9f59-08ff5c699706","Type":"ContainerDied","Data":"79509a68411cab794c6d1881d418f84bcd35c697074e9358434df1b05c240727"} Oct 14 09:26:33 crc kubenswrapper[5058]: I1014 09:26:33.845708 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:26:33 crc kubenswrapper[5058]: I1014 09:26:33.996267 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory\") pod \"9a063aae-0e40-4f4c-9f59-08ff5c699706\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " Oct 14 09:26:33 crc kubenswrapper[5058]: I1014 09:26:33.996339 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6wdh\" (UniqueName: \"kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh\") pod \"9a063aae-0e40-4f4c-9f59-08ff5c699706\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " Oct 14 09:26:33 crc kubenswrapper[5058]: I1014 09:26:33.996505 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key\") pod \"9a063aae-0e40-4f4c-9f59-08ff5c699706\" (UID: \"9a063aae-0e40-4f4c-9f59-08ff5c699706\") " Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.266460 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" event={"ID":"9a063aae-0e40-4f4c-9f59-08ff5c699706","Type":"ContainerDied","Data":"2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3"} Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.266879 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fc9bdccd273944479e82c079cd63b9302ce956126119cac3a30b3a7dbfbdce3" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.266983 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-rvbrk" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.333322 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wftx7"] Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.333878 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="extract-content" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.333897 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="extract-content" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.333924 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="registry-server" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.333932 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="registry-server" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.333949 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="extract-content" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.333957 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="extract-content" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.333981 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="extract-utilities" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.333989 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="extract-utilities" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.334002 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="extract-utilities" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334010 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="extract-utilities" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.334020 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="registry-server" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334027 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="registry-server" Oct 14 09:26:34 crc kubenswrapper[5058]: E1014 09:26:34.334041 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a063aae-0e40-4f4c-9f59-08ff5c699706" containerName="configure-network-openstack-openstack-cell1" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334050 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a063aae-0e40-4f4c-9f59-08ff5c699706" containerName="configure-network-openstack-openstack-cell1" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334349 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a063aae-0e40-4f4c-9f59-08ff5c699706" containerName="configure-network-openstack-openstack-cell1" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334384 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9318b746-5455-46d0-a2bb-b717bf93735f" containerName="registry-server" Oct 
14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.334401 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bcdad85-f38f-4ab2-94a9-6cfdb66df4a0" containerName="registry-server" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.335284 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.345005 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wftx7"] Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.492866 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh" (OuterVolumeSpecName: "kube-api-access-x6wdh") pod "9a063aae-0e40-4f4c-9f59-08ff5c699706" (UID: "9a063aae-0e40-4f4c-9f59-08ff5c699706"). InnerVolumeSpecName "kube-api-access-x6wdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.509173 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7m6\" (UniqueName: \"kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.509257 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.509638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.510120 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6wdh\" (UniqueName: \"kubernetes.io/projected/9a063aae-0e40-4f4c-9f59-08ff5c699706-kube-api-access-x6wdh\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.612109 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7m6\" (UniqueName: \"kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.612225 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc 
kubenswrapper[5058]: I1014 09:26:34.612358 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.620563 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.626904 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.641445 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory" (OuterVolumeSpecName: "inventory") pod "9a063aae-0e40-4f4c-9f59-08ff5c699706" (UID: "9a063aae-0e40-4f4c-9f59-08ff5c699706"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.644964 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7m6\" (UniqueName: \"kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6\") pod \"validate-network-openstack-openstack-cell1-wftx7\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.651288 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a063aae-0e40-4f4c-9f59-08ff5c699706" (UID: "9a063aae-0e40-4f4c-9f59-08ff5c699706"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.665891 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.714298 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:34 crc kubenswrapper[5058]: I1014 09:26:34.714507 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a063aae-0e40-4f4c-9f59-08ff5c699706-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:35 crc kubenswrapper[5058]: I1014 09:26:35.289624 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wftx7"] Oct 14 09:26:35 crc kubenswrapper[5058]: I1014 09:26:35.309394 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:26:36 crc kubenswrapper[5058]: I1014 09:26:36.288489 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" event={"ID":"3b81f53f-5f91-41a5-b84e-8de887356308","Type":"ContainerStarted","Data":"bc2b4caa200f91b602794862a59e63ef099cefb54238e11b2a98bdead490f44f"} Oct 14 09:26:36 crc kubenswrapper[5058]: I1014 09:26:36.288838 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" event={"ID":"3b81f53f-5f91-41a5-b84e-8de887356308","Type":"ContainerStarted","Data":"9b45e02a0213dd9db0e5b2933c2e278897f59dd55e780c367c630691c32acebf"} Oct 14 09:26:36 crc kubenswrapper[5058]: I1014 09:26:36.291509 5058 generic.go:334] "Generic (PLEG): container finished" podID="0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" containerID="ba4cd1f5ae6c26b037f0130ebbafc57c2fa28aa7ef0e75c63eb69b09a1b7b83d" exitCode=0 Oct 14 09:26:36 crc kubenswrapper[5058]: I1014 09:26:36.291535 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" event={"ID":"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b","Type":"ContainerDied","Data":"ba4cd1f5ae6c26b037f0130ebbafc57c2fa28aa7ef0e75c63eb69b09a1b7b83d"} Oct 14 09:26:36 crc kubenswrapper[5058]: I1014 09:26:36.323402 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" podStartSLOduration=1.8901478649999999 podStartE2EDuration="2.323381363s" podCreationTimestamp="2025-10-14 09:26:34 +0000 UTC" firstStartedPulling="2025-10-14 09:26:35.30895694 +0000 UTC m=+9543.220040786" lastFinishedPulling="2025-10-14 09:26:35.742190478 +0000 UTC m=+9543.653274284" observedRunningTime="2025-10-14 09:26:36.317435124 +0000 UTC m=+9544.228518950" watchObservedRunningTime="2025-10-14 09:26:36.323381363 +0000 UTC m=+9544.234465169" Oct 14 09:26:37 crc kubenswrapper[5058]: I1014 09:26:37.848114 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:26:37 crc kubenswrapper[5058]: I1014 09:26:37.994505 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6p5\" (UniqueName: \"kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5\") pod \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " Oct 14 09:26:37 crc kubenswrapper[5058]: I1014 09:26:37.994592 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key\") pod \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " Oct 14 09:26:37 crc kubenswrapper[5058]: I1014 09:26:37.994752 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory\") pod \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\" (UID: \"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b\") " Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.002539 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5" (OuterVolumeSpecName: "kube-api-access-fr6p5") pod "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" (UID: "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b"). InnerVolumeSpecName "kube-api-access-fr6p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.035988 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" (UID: "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.044670 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory" (OuterVolumeSpecName: "inventory") pod "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" (UID: "0067ccfb-d0f9-4b48-bcc6-73f70a2e206b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.098169 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr6p5\" (UniqueName: \"kubernetes.io/projected/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-kube-api-access-fr6p5\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.098217 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.098236 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0067ccfb-d0f9-4b48-bcc6-73f70a2e206b-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.314341 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" event={"ID":"0067ccfb-d0f9-4b48-bcc6-73f70a2e206b","Type":"ContainerDied","Data":"7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5"} Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.314656 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f35ba6eb2aa9ef19ffefb8ba14ff0c94aceefcd76378a4270b6a3158219b9e5" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.314381 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell2-lgkqn" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.470379 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell2-bfkb2"] Oct 14 09:26:38 crc kubenswrapper[5058]: E1014 09:26:38.485443 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" containerName="configure-network-openstack-openstack-cell2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.485498 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" containerName="configure-network-openstack-openstack-cell2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.486461 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="0067ccfb-d0f9-4b48-bcc6-73f70a2e206b" containerName="configure-network-openstack-openstack-cell2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.491654 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.494776 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.495147 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.521680 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell2-bfkb2"] Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.616798 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mgvv\" (UniqueName: \"kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.617144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.617435 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.719445 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.719570 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mgvv\" (UniqueName: \"kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.719683 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.724499 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: 
\"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.724620 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.744720 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mgvv\" (UniqueName: \"kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv\") pod \"validate-network-openstack-openstack-cell2-bfkb2\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:38 crc kubenswrapper[5058]: I1014 09:26:38.817695 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:39 crc kubenswrapper[5058]: I1014 09:26:39.430893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell2-bfkb2"] Oct 14 09:26:40 crc kubenswrapper[5058]: I1014 09:26:40.343255 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" event={"ID":"1dc66c44-6583-4165-9d04-012b1fed763a","Type":"ContainerStarted","Data":"bf90a8fb41cb4dbec5b80afebcfa5c6ec35375eff7f3fc1543265cb9f5c21d37"} Oct 14 09:26:40 crc kubenswrapper[5058]: I1014 09:26:40.343638 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" event={"ID":"1dc66c44-6583-4165-9d04-012b1fed763a","Type":"ContainerStarted","Data":"2131681a6edd2816b7d54f1052501a50eaa5f1a698d37959eb1363e21c7fa6ce"} Oct 14 09:26:40 crc kubenswrapper[5058]: I1014 09:26:40.377750 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" podStartSLOduration=1.766275773 podStartE2EDuration="2.377716371s" podCreationTimestamp="2025-10-14 09:26:38 +0000 UTC" firstStartedPulling="2025-10-14 09:26:39.437950505 +0000 UTC m=+9547.349034321" lastFinishedPulling="2025-10-14 09:26:40.049391073 +0000 UTC m=+9547.960474919" observedRunningTime="2025-10-14 09:26:40.365932195 +0000 UTC m=+9548.277016011" watchObservedRunningTime="2025-10-14 09:26:40.377716371 +0000 UTC m=+9548.288800217" Oct 14 09:26:41 crc kubenswrapper[5058]: I1014 09:26:41.790954 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:26:42 crc kubenswrapper[5058]: I1014 09:26:42.384361 5058 generic.go:334] "Generic (PLEG): container finished" podID="3b81f53f-5f91-41a5-b84e-8de887356308" containerID="bc2b4caa200f91b602794862a59e63ef099cefb54238e11b2a98bdead490f44f" exitCode=0 Oct 14 09:26:42 crc kubenswrapper[5058]: I1014 09:26:42.384458 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" event={"ID":"3b81f53f-5f91-41a5-b84e-8de887356308","Type":"ContainerDied","Data":"bc2b4caa200f91b602794862a59e63ef099cefb54238e11b2a98bdead490f44f"} Oct 14 09:26:42 crc kubenswrapper[5058]: I1014 09:26:42.390164 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4"} Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.733657 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.858705 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory\") pod \"3b81f53f-5f91-41a5-b84e-8de887356308\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.858910 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key\") pod \"3b81f53f-5f91-41a5-b84e-8de887356308\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.859012 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7m6\" (UniqueName: \"kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6\") pod \"3b81f53f-5f91-41a5-b84e-8de887356308\" (UID: \"3b81f53f-5f91-41a5-b84e-8de887356308\") " Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.872176 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6" (OuterVolumeSpecName: "kube-api-access-bh7m6") pod "3b81f53f-5f91-41a5-b84e-8de887356308" (UID: "3b81f53f-5f91-41a5-b84e-8de887356308"). InnerVolumeSpecName "kube-api-access-bh7m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.907950 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory" (OuterVolumeSpecName: "inventory") pod "3b81f53f-5f91-41a5-b84e-8de887356308" (UID: "3b81f53f-5f91-41a5-b84e-8de887356308"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.914443 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3b81f53f-5f91-41a5-b84e-8de887356308" (UID: "3b81f53f-5f91-41a5-b84e-8de887356308"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.966276 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.966315 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7m6\" (UniqueName: \"kubernetes.io/projected/3b81f53f-5f91-41a5-b84e-8de887356308-kube-api-access-bh7m6\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:44 crc kubenswrapper[5058]: I1014 09:26:44.966324 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b81f53f-5f91-41a5-b84e-8de887356308-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.427137 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" event={"ID":"3b81f53f-5f91-41a5-b84e-8de887356308","Type":"ContainerDied","Data":"9b45e02a0213dd9db0e5b2933c2e278897f59dd55e780c367c630691c32acebf"} Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.427549 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b45e02a0213dd9db0e5b2933c2e278897f59dd55e780c367c630691c32acebf" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.427222 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wftx7" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.844927 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sthwr"] Oct 14 09:26:45 crc kubenswrapper[5058]: E1014 09:26:45.845777 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b81f53f-5f91-41a5-b84e-8de887356308" containerName="validate-network-openstack-openstack-cell1" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.845802 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b81f53f-5f91-41a5-b84e-8de887356308" containerName="validate-network-openstack-openstack-cell1" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.846094 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b81f53f-5f91-41a5-b84e-8de887356308" containerName="validate-network-openstack-openstack-cell1" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.846973 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.849388 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.851082 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.860007 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sthwr"] Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.992589 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.992900 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:45 crc kubenswrapper[5058]: I1014 09:26:45.992969 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srqwn\" (UniqueName: \"kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.095244 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.095377 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.095442 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srqwn\" (UniqueName: \"kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.697398 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 
crc kubenswrapper[5058]: I1014 09:26:46.699178 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srqwn\" (UniqueName: \"kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.699978 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key\") pod \"install-os-openstack-openstack-cell1-sthwr\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:46 crc kubenswrapper[5058]: I1014 09:26:46.802508 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:26:47 crc kubenswrapper[5058]: I1014 09:26:47.452756 5058 generic.go:334] "Generic (PLEG): container finished" podID="1dc66c44-6583-4165-9d04-012b1fed763a" containerID="bf90a8fb41cb4dbec5b80afebcfa5c6ec35375eff7f3fc1543265cb9f5c21d37" exitCode=0 Oct 14 09:26:47 crc kubenswrapper[5058]: I1014 09:26:47.452843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" event={"ID":"1dc66c44-6583-4165-9d04-012b1fed763a","Type":"ContainerDied","Data":"bf90a8fb41cb4dbec5b80afebcfa5c6ec35375eff7f3fc1543265cb9f5c21d37"} Oct 14 09:26:47 crc kubenswrapper[5058]: I1014 09:26:47.476717 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-sthwr"] Oct 14 09:26:48 crc kubenswrapper[5058]: I1014 09:26:48.470499 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sthwr" event={"ID":"25f8c470-b427-4e78-a4b3-80e6fe632075","Type":"ContainerStarted","Data":"56761cce620efad937471ddfed0568e70745590c84ec671f84e749edfdb829b9"} Oct 14 09:26:48 crc kubenswrapper[5058]: I1014 09:26:48.472056 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sthwr" event={"ID":"25f8c470-b427-4e78-a4b3-80e6fe632075","Type":"ContainerStarted","Data":"84eb647bffc8ed4080450b8dec7f4360ffc4932493bea2362f471ce76e78a5f2"} Oct 14 09:26:48 crc kubenswrapper[5058]: I1014 09:26:48.519195 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-sthwr" podStartSLOduration=2.955860413 podStartE2EDuration="3.519176739s" podCreationTimestamp="2025-10-14 09:26:45 +0000 UTC" firstStartedPulling="2025-10-14 09:26:47.490408898 +0000 UTC m=+9555.401492714" lastFinishedPulling="2025-10-14 09:26:48.053725234 +0000 UTC m=+9555.964809040" observedRunningTime="2025-10-14 09:26:48.514751903 +0000 UTC m=+9556.425835719" watchObservedRunningTime="2025-10-14 09:26:48.519176739 +0000 UTC m=+9556.430260545" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.011162 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.079270 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mgvv\" (UniqueName: \"kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv\") pod \"1dc66c44-6583-4165-9d04-012b1fed763a\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.079400 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory\") pod \"1dc66c44-6583-4165-9d04-012b1fed763a\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.079441 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key\") pod \"1dc66c44-6583-4165-9d04-012b1fed763a\" (UID: \"1dc66c44-6583-4165-9d04-012b1fed763a\") " Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.088755 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv" (OuterVolumeSpecName: "kube-api-access-9mgvv") pod "1dc66c44-6583-4165-9d04-012b1fed763a" (UID: "1dc66c44-6583-4165-9d04-012b1fed763a"). InnerVolumeSpecName "kube-api-access-9mgvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.126900 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory" (OuterVolumeSpecName: "inventory") pod "1dc66c44-6583-4165-9d04-012b1fed763a" (UID: "1dc66c44-6583-4165-9d04-012b1fed763a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.129365 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1dc66c44-6583-4165-9d04-012b1fed763a" (UID: "1dc66c44-6583-4165-9d04-012b1fed763a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.182111 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mgvv\" (UniqueName: \"kubernetes.io/projected/1dc66c44-6583-4165-9d04-012b1fed763a-kube-api-access-9mgvv\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.182152 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.182165 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dc66c44-6583-4165-9d04-012b1fed763a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.493434 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" event={"ID":"1dc66c44-6583-4165-9d04-012b1fed763a","Type":"ContainerDied","Data":"2131681a6edd2816b7d54f1052501a50eaa5f1a698d37959eb1363e21c7fa6ce"} Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.493500 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2131681a6edd2816b7d54f1052501a50eaa5f1a698d37959eb1363e21c7fa6ce" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.493784 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell2-bfkb2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.593549 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell2-bfxmn"] Oct 14 09:26:49 crc kubenswrapper[5058]: E1014 09:26:49.594067 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc66c44-6583-4165-9d04-012b1fed763a" containerName="validate-network-openstack-openstack-cell2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.594092 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc66c44-6583-4165-9d04-012b1fed763a" containerName="validate-network-openstack-openstack-cell2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.594367 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc66c44-6583-4165-9d04-012b1fed763a" containerName="validate-network-openstack-openstack-cell2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.595382 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.599052 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.603547 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell2-bfxmn"] Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.636842 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.697438 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.697615 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z68bc\" (UniqueName: \"kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.697682 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.800137 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.800255 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.810917 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z68bc\" (UniqueName: \"kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.817770 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc 
Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.820629 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.829720 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z68bc\" (UniqueName: \"kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc\") pod \"install-os-openstack-openstack-cell2-bfxmn\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:49 crc kubenswrapper[5058]: I1014 09:26:49.967173 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:26:50 crc kubenswrapper[5058]: I1014 09:26:50.575749 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell2-bfxmn"] Oct 14 09:26:51 crc kubenswrapper[5058]: I1014 09:26:51.517768 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" event={"ID":"38f7f7fa-b19d-445f-bf0a-97633b0234bb","Type":"ContainerStarted","Data":"7c2d37bc1179a28799dc6407a012e3c27c4a7af167c2a430cf68b186b45cbc9e"} Oct 14 09:26:51 crc kubenswrapper[5058]: I1014 09:26:51.518151 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" event={"ID":"38f7f7fa-b19d-445f-bf0a-97633b0234bb","Type":"ContainerStarted","Data":"881fbf965ba52e3a36172bb931da7376a325ce92c4c187442534933508c92029"} Oct 14 09:26:51 crc kubenswrapper[5058]: I1014 09:26:51.541266 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" podStartSLOduration=2.022328304 podStartE2EDuration="2.541246405s" podCreationTimestamp="2025-10-14 09:26:49 +0000 UTC" firstStartedPulling="2025-10-14 09:26:50.58534997 +0000 UTC m=+9558.496433776" lastFinishedPulling="2025-10-14 09:26:51.104268061 +0000 UTC m=+9559.015351877" observedRunningTime="2025-10-14 09:26:51.534929195 +0000 UTC m=+9559.446013061" watchObservedRunningTime="2025-10-14 09:26:51.541246405 +0000 UTC m=+9559.452330221"
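
Note on the startup-latency entry above: podStartSLOduration is podStartE2EDuration minus the image-pull window, consistent with the pod-startup SLI excluding image pulling, and the monotonic m=+ offsets in the entry let you check the arithmetic exactly. A small sketch, with the values copied from the log:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds) taken from the entry above.
        firstStartedPulling := 9558.496433776
        lastFinishedPulling := 9559.015351877
        e2e := 2.541246405 // podStartE2EDuration

        pull := lastFinishedPulling - firstStartedPulling // 0.518918101s spent pulling the image
        slo := e2e - pull
        fmt.Printf("podStartSLOduration ≈ %.9f\n", slo) // ≈ 2.022328304, matching the log
    }
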
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.595212 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srqwn\" (UniqueName: \"kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn\") pod \"25f8c470-b427-4e78-a4b3-80e6fe632075\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.595323 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory\") pod \"25f8c470-b427-4e78-a4b3-80e6fe632075\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.595489 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key\") pod \"25f8c470-b427-4e78-a4b3-80e6fe632075\" (UID: \"25f8c470-b427-4e78-a4b3-80e6fe632075\") " Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.601029 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn" (OuterVolumeSpecName: "kube-api-access-srqwn") pod "25f8c470-b427-4e78-a4b3-80e6fe632075" (UID: "25f8c470-b427-4e78-a4b3-80e6fe632075"). InnerVolumeSpecName "kube-api-access-srqwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.623388 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory" (OuterVolumeSpecName: "inventory") pod "25f8c470-b427-4e78-a4b3-80e6fe632075" (UID: "25f8c470-b427-4e78-a4b3-80e6fe632075"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.637550 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "25f8c470-b427-4e78-a4b3-80e6fe632075" (UID: "25f8c470-b427-4e78-a4b3-80e6fe632075"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.698758 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.698816 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srqwn\" (UniqueName: \"kubernetes.io/projected/25f8c470-b427-4e78-a4b3-80e6fe632075-kube-api-access-srqwn\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:37 crc kubenswrapper[5058]: I1014 09:27:37.698833 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25f8c470-b427-4e78-a4b3-80e6fe632075-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.088892 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-sthwr" event={"ID":"25f8c470-b427-4e78-a4b3-80e6fe632075","Type":"ContainerDied","Data":"84eb647bffc8ed4080450b8dec7f4360ffc4932493bea2362f471ce76e78a5f2"} Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.088940 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84eb647bffc8ed4080450b8dec7f4360ffc4932493bea2362f471ce76e78a5f2" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.088970 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-sthwr" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.194758 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-ncjhg"] Oct 14 09:27:38 crc kubenswrapper[5058]: E1014 09:27:38.195930 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f8c470-b427-4e78-a4b3-80e6fe632075" containerName="install-os-openstack-openstack-cell1" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.195960 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f8c470-b427-4e78-a4b3-80e6fe632075" containerName="install-os-openstack-openstack-cell1" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.196348 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f8c470-b427-4e78-a4b3-80e6fe632075" containerName="install-os-openstack-openstack-cell1" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.197845 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.202479 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.202584 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.213073 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-ncjhg"] Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.318373 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.318461 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nscw\" (UniqueName: \"kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.318509 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.420537 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.421401 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nscw\" (UniqueName: \"kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.421434 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.426209 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " 
pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.426713 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.449738 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nscw\" (UniqueName: \"kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw\") pod \"configure-os-openstack-openstack-cell1-ncjhg\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:38 crc kubenswrapper[5058]: I1014 09:27:38.530540 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:27:39 crc kubenswrapper[5058]: I1014 09:27:39.123873 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-ncjhg"] Oct 14 09:27:40 crc kubenswrapper[5058]: I1014 09:27:40.116428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" event={"ID":"67663dcb-1534-4133-a168-02fcde69c9b0","Type":"ContainerStarted","Data":"c12281357fd75de9843fb49b2288ef2e2bfcf117ec0a34b558f795471d60b3d1"} Oct 14 09:27:40 crc kubenswrapper[5058]: I1014 09:27:40.120451 5058 generic.go:334] "Generic (PLEG): container finished" podID="38f7f7fa-b19d-445f-bf0a-97633b0234bb" containerID="7c2d37bc1179a28799dc6407a012e3c27c4a7af167c2a430cf68b186b45cbc9e" exitCode=0 Oct 14 09:27:40 crc kubenswrapper[5058]: I1014 09:27:40.120494 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" event={"ID":"38f7f7fa-b19d-445f-bf0a-97633b0234bb","Type":"ContainerDied","Data":"7c2d37bc1179a28799dc6407a012e3c27c4a7af167c2a430cf68b186b45cbc9e"} Oct 14 09:27:41 crc kubenswrapper[5058]: I1014 09:27:41.132234 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" event={"ID":"67663dcb-1534-4133-a168-02fcde69c9b0","Type":"ContainerStarted","Data":"540ce305dede8c4cb7fadacdf7c978910b0738591903b541931d11df69467f75"} Oct 14 09:27:41 crc kubenswrapper[5058]: I1014 09:27:41.166460 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" podStartSLOduration=2.38092256 podStartE2EDuration="3.16643992s" podCreationTimestamp="2025-10-14 09:27:38 +0000 UTC" firstStartedPulling="2025-10-14 09:27:39.510245334 +0000 UTC m=+9607.421329180" lastFinishedPulling="2025-10-14 09:27:40.295762704 +0000 UTC m=+9608.206846540" observedRunningTime="2025-10-14 09:27:41.161617232 +0000 UTC m=+9609.072701039" watchObservedRunningTime="2025-10-14 09:27:41.16643992 +0000 UTC m=+9609.077523736" Oct 14 09:27:41 crc kubenswrapper[5058]: I1014 09:27:41.942998 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.108576 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z68bc\" (UniqueName: \"kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc\") pod \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.110203 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory\") pod \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.110284 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key\") pod \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\" (UID: \"38f7f7fa-b19d-445f-bf0a-97633b0234bb\") " Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.114348 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc" (OuterVolumeSpecName: "kube-api-access-z68bc") pod "38f7f7fa-b19d-445f-bf0a-97633b0234bb" (UID: "38f7f7fa-b19d-445f-bf0a-97633b0234bb"). InnerVolumeSpecName "kube-api-access-z68bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.140366 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38f7f7fa-b19d-445f-bf0a-97633b0234bb" (UID: "38f7f7fa-b19d-445f-bf0a-97633b0234bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.149214 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.149497 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell2-bfxmn" event={"ID":"38f7f7fa-b19d-445f-bf0a-97633b0234bb","Type":"ContainerDied","Data":"881fbf965ba52e3a36172bb931da7376a325ce92c4c187442534933508c92029"} Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.149536 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881fbf965ba52e3a36172bb931da7376a325ce92c4c187442534933508c92029" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.152580 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory" (OuterVolumeSpecName: "inventory") pod "38f7f7fa-b19d-445f-bf0a-97633b0234bb" (UID: "38f7f7fa-b19d-445f-bf0a-97633b0234bb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.213160 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z68bc\" (UniqueName: \"kubernetes.io/projected/38f7f7fa-b19d-445f-bf0a-97633b0234bb-kube-api-access-z68bc\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.213214 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.213232 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38f7f7fa-b19d-445f-bf0a-97633b0234bb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.250535 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-zc4mc"] Oct 14 09:27:42 crc kubenswrapper[5058]: E1014 09:27:42.251078 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f7f7fa-b19d-445f-bf0a-97633b0234bb" containerName="install-os-openstack-openstack-cell2" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.251102 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f7f7fa-b19d-445f-bf0a-97633b0234bb" containerName="install-os-openstack-openstack-cell2" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.251391 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f7f7fa-b19d-445f-bf0a-97633b0234bb" containerName="install-os-openstack-openstack-cell2" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.252335 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.269192 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-zc4mc"] Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.421727 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.421787 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshhn\" (UniqueName: \"kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.421887 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.524058 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.524122 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshhn\" (UniqueName: \"kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.524194 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.528298 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.545457 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " 
pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.548948 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshhn\" (UniqueName: \"kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn\") pod \"configure-os-openstack-openstack-cell2-zc4mc\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:42 crc kubenswrapper[5058]: I1014 09:27:42.599387 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:27:43 crc kubenswrapper[5058]: I1014 09:27:43.227393 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-zc4mc"] Oct 14 09:27:44 crc kubenswrapper[5058]: I1014 09:27:44.183637 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" event={"ID":"63658dc1-97e3-43d3-9bf8-717180f1aeca","Type":"ContainerStarted","Data":"5c475f898263a0fa26119add352ff3a916623f29377b72a5e591a9b67f886c34"} Oct 14 09:27:44 crc kubenswrapper[5058]: I1014 09:27:44.184208 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" event={"ID":"63658dc1-97e3-43d3-9bf8-717180f1aeca","Type":"ContainerStarted","Data":"236316b6b2903f021161f79feb361dae4d5987d6f8390ec9009c49fb01c75d77"} Oct 14 09:27:44 crc kubenswrapper[5058]: I1014 09:27:44.212854 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" podStartSLOduration=1.805353045 podStartE2EDuration="2.212832829s" podCreationTimestamp="2025-10-14 09:27:42 +0000 UTC" firstStartedPulling="2025-10-14 09:27:43.239037433 +0000 UTC m=+9611.150121279" lastFinishedPulling="2025-10-14 09:27:43.646517217 +0000 UTC m=+9611.557601063" observedRunningTime="2025-10-14 09:27:44.206337554 +0000 UTC m=+9612.117421390" watchObservedRunningTime="2025-10-14 09:27:44.212832829 +0000 UTC m=+9612.123916645" Oct 14 09:28:34 crc kubenswrapper[5058]: I1014 09:28:34.835313 5058 generic.go:334] "Generic (PLEG): container finished" podID="67663dcb-1534-4133-a168-02fcde69c9b0" containerID="540ce305dede8c4cb7fadacdf7c978910b0738591903b541931d11df69467f75" exitCode=2 Oct 14 09:28:34 crc kubenswrapper[5058]: I1014 09:28:34.835428 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" event={"ID":"67663dcb-1534-4133-a168-02fcde69c9b0","Type":"ContainerDied","Data":"540ce305dede8c4cb7fadacdf7c978910b0738591903b541931d11df69467f75"} Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.154101 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.308333 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory\") pod \"67663dcb-1534-4133-a168-02fcde69c9b0\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.308430 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nscw\" (UniqueName: \"kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw\") pod \"67663dcb-1534-4133-a168-02fcde69c9b0\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.308571 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key\") pod \"67663dcb-1534-4133-a168-02fcde69c9b0\" (UID: \"67663dcb-1534-4133-a168-02fcde69c9b0\") " Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.315102 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw" (OuterVolumeSpecName: "kube-api-access-7nscw") pod "67663dcb-1534-4133-a168-02fcde69c9b0" (UID: "67663dcb-1534-4133-a168-02fcde69c9b0"). InnerVolumeSpecName "kube-api-access-7nscw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.337175 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67663dcb-1534-4133-a168-02fcde69c9b0" (UID: "67663dcb-1534-4133-a168-02fcde69c9b0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.337947 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory" (OuterVolumeSpecName: "inventory") pod "67663dcb-1534-4133-a168-02fcde69c9b0" (UID: "67663dcb-1534-4133-a168-02fcde69c9b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.411985 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.412324 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nscw\" (UniqueName: \"kubernetes.io/projected/67663dcb-1534-4133-a168-02fcde69c9b0-kube-api-access-7nscw\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.412360 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67663dcb-1534-4133-a168-02fcde69c9b0-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.884933 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" event={"ID":"67663dcb-1534-4133-a168-02fcde69c9b0","Type":"ContainerDied","Data":"c12281357fd75de9843fb49b2288ef2e2bfcf117ec0a34b558f795471d60b3d1"} Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.884984 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-ncjhg" Oct 14 09:28:37 crc kubenswrapper[5058]: I1014 09:28:37.885005 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12281357fd75de9843fb49b2288ef2e2bfcf117ec0a34b558f795471d60b3d1" Oct 14 09:28:38 crc kubenswrapper[5058]: I1014 09:28:38.900388 5058 generic.go:334] "Generic (PLEG): container finished" podID="63658dc1-97e3-43d3-9bf8-717180f1aeca" containerID="5c475f898263a0fa26119add352ff3a916623f29377b72a5e591a9b67f886c34" exitCode=2 Oct 14 09:28:38 crc kubenswrapper[5058]: I1014 09:28:38.900625 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" event={"ID":"63658dc1-97e3-43d3-9bf8-717180f1aeca","Type":"ContainerDied","Data":"5c475f898263a0fa26119add352ff3a916623f29377b72a5e591a9b67f886c34"} Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.552251 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.699545 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory\") pod \"63658dc1-97e3-43d3-9bf8-717180f1aeca\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.699889 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key\") pod \"63658dc1-97e3-43d3-9bf8-717180f1aeca\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.700113 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sshhn\" (UniqueName: \"kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn\") pod \"63658dc1-97e3-43d3-9bf8-717180f1aeca\" (UID: \"63658dc1-97e3-43d3-9bf8-717180f1aeca\") " Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.705864 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn" (OuterVolumeSpecName: "kube-api-access-sshhn") pod "63658dc1-97e3-43d3-9bf8-717180f1aeca" (UID: "63658dc1-97e3-43d3-9bf8-717180f1aeca"). InnerVolumeSpecName "kube-api-access-sshhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.738400 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "63658dc1-97e3-43d3-9bf8-717180f1aeca" (UID: "63658dc1-97e3-43d3-9bf8-717180f1aeca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.738435 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory" (OuterVolumeSpecName: "inventory") pod "63658dc1-97e3-43d3-9bf8-717180f1aeca" (UID: "63658dc1-97e3-43d3-9bf8-717180f1aeca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.803899 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.803988 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sshhn\" (UniqueName: \"kubernetes.io/projected/63658dc1-97e3-43d3-9bf8-717180f1aeca-kube-api-access-sshhn\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.804048 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63658dc1-97e3-43d3-9bf8-717180f1aeca-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.931467 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" event={"ID":"63658dc1-97e3-43d3-9bf8-717180f1aeca","Type":"ContainerDied","Data":"236316b6b2903f021161f79feb361dae4d5987d6f8390ec9009c49fb01c75d77"} Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.931732 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236316b6b2903f021161f79feb361dae4d5987d6f8390ec9009c49fb01c75d77" Oct 14 09:28:40 crc kubenswrapper[5058]: I1014 09:28:40.931531 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-zc4mc" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.040362 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5bpns"] Oct 14 09:28:44 crc kubenswrapper[5058]: E1014 09:28:44.042519 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63658dc1-97e3-43d3-9bf8-717180f1aeca" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.042620 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="63658dc1-97e3-43d3-9bf8-717180f1aeca" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:28:44 crc kubenswrapper[5058]: E1014 09:28:44.042743 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67663dcb-1534-4133-a168-02fcde69c9b0" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.042847 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="67663dcb-1534-4133-a168-02fcde69c9b0" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.043249 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="67663dcb-1534-4133-a168-02fcde69c9b0" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.043349 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="63658dc1-97e3-43d3-9bf8-717180f1aeca" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.044318 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.049727 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.050162 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.055931 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.059497 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.066933 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5bpns"] Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.187294 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqtj\" (UniqueName: \"kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.188009 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.188203 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.291653 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqtj\" (UniqueName: \"kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.291922 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.292037 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: 
I1014 09:28:44.300643 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.302045 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.318308 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqtj\" (UniqueName: \"kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj\") pod \"configure-os-openstack-openstack-cell1-5bpns\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.381921 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.774416 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5bpns"] Oct 14 09:28:44 crc kubenswrapper[5058]: I1014 09:28:44.994456 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" event={"ID":"eb3265ea-5f4a-453d-ad52-b9e735a5c693","Type":"ContainerStarted","Data":"aa8b5eeab29d34ea3c0cf552a5c6ea04ab460252a1631294716fe7f3e707004e"} Oct 14 09:28:46 crc kubenswrapper[5058]: I1014 09:28:46.013493 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" event={"ID":"eb3265ea-5f4a-453d-ad52-b9e735a5c693","Type":"ContainerStarted","Data":"bba28825653b02aac428bc38c0f0f5b195d8470ba7d3e710d1ff2e04ab7aaa78"} Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.025683 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" podStartSLOduration=3.593803424 podStartE2EDuration="4.025664923s" podCreationTimestamp="2025-10-14 09:28:44 +0000 UTC" firstStartedPulling="2025-10-14 09:28:44.767013593 +0000 UTC m=+9672.678097439" lastFinishedPulling="2025-10-14 09:28:45.198875092 +0000 UTC m=+9673.109958938" observedRunningTime="2025-10-14 09:28:46.05556677 +0000 UTC m=+9673.966650616" watchObservedRunningTime="2025-10-14 09:28:48.025664923 +0000 UTC m=+9675.936748739" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.028974 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-j89ml"] Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.030500 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.037185 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.039030 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.062094 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-j89ml"] Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.114330 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.114520 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.114609 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5rj\" (UniqueName: \"kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.216232 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.216370 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.216404 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh5rj\" (UniqueName: \"kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.222720 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " 
pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.222790 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.237068 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh5rj\" (UniqueName: \"kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj\") pod \"configure-os-openstack-openstack-cell2-j89ml\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.353477 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:28:48 crc kubenswrapper[5058]: I1014 09:28:48.927711 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell2-j89ml"] Oct 14 09:28:49 crc kubenswrapper[5058]: W1014 09:28:49.500627 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6cd75bd_8b9a_4ee4_b2b8_990f4c982d10.slice/crio-30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b WatchSource:0}: Error finding container 30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b: Status 404 returned error can't find the container with id 30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b Oct 14 09:28:50 crc kubenswrapper[5058]: I1014 09:28:50.066133 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" event={"ID":"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10","Type":"ContainerStarted","Data":"30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b"} Oct 14 09:28:51 crc kubenswrapper[5058]: I1014 09:28:51.101122 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" event={"ID":"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10","Type":"ContainerStarted","Data":"b883bbb1d934c456181bd342c4139cf691b05f4eb8862018b66470909cbbcaae"} Oct 14 09:28:51 crc kubenswrapper[5058]: I1014 09:28:51.131978 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" podStartSLOduration=2.712275337 podStartE2EDuration="3.131951849s" podCreationTimestamp="2025-10-14 09:28:48 +0000 UTC" firstStartedPulling="2025-10-14 09:28:49.515091525 +0000 UTC m=+9677.426175361" lastFinishedPulling="2025-10-14 09:28:49.934768027 +0000 UTC m=+9677.845851873" observedRunningTime="2025-10-14 09:28:51.120043019 +0000 UTC m=+9679.031126865" watchObservedRunningTime="2025-10-14 09:28:51.131951849 +0000 UTC m=+9679.043035695" Oct 14 09:29:03 crc kubenswrapper[5058]: I1014 09:29:03.655913 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:29:03 crc kubenswrapper[5058]: I1014 09:29:03.656652 5058 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:29:32 crc kubenswrapper[5058]: I1014 09:29:32.640255 5058 generic.go:334] "Generic (PLEG): container finished" podID="eb3265ea-5f4a-453d-ad52-b9e735a5c693" containerID="bba28825653b02aac428bc38c0f0f5b195d8470ba7d3e710d1ff2e04ab7aaa78" exitCode=0 Oct 14 09:29:32 crc kubenswrapper[5058]: I1014 09:29:32.640392 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" event={"ID":"eb3265ea-5f4a-453d-ad52-b9e735a5c693","Type":"ContainerDied","Data":"bba28825653b02aac428bc38c0f0f5b195d8470ba7d3e710d1ff2e04ab7aaa78"} Oct 14 09:29:33 crc kubenswrapper[5058]: I1014 09:29:33.656329 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:29:33 crc kubenswrapper[5058]: I1014 09:29:33.656898 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.616780 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.677006 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" event={"ID":"eb3265ea-5f4a-453d-ad52-b9e735a5c693","Type":"ContainerDied","Data":"aa8b5eeab29d34ea3c0cf552a5c6ea04ab460252a1631294716fe7f3e707004e"} Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.677038 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa8b5eeab29d34ea3c0cf552a5c6ea04ab460252a1631294716fe7f3e707004e" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.677116 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5bpns" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.768064 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key\") pod \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.768145 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqtj\" (UniqueName: \"kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj\") pod \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.768465 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory\") pod \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\" (UID: \"eb3265ea-5f4a-453d-ad52-b9e735a5c693\") " Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.774932 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-k65r8"] Oct 14 09:29:34 crc kubenswrapper[5058]: E1014 09:29:34.775414 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3265ea-5f4a-453d-ad52-b9e735a5c693" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.775430 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3265ea-5f4a-453d-ad52-b9e735a5c693" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.775688 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3265ea-5f4a-453d-ad52-b9e735a5c693" containerName="configure-os-openstack-openstack-cell1" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.776650 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.781991 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj" (OuterVolumeSpecName: "kube-api-access-fzqtj") pod "eb3265ea-5f4a-453d-ad52-b9e735a5c693" (UID: "eb3265ea-5f4a-453d-ad52-b9e735a5c693"). InnerVolumeSpecName "kube-api-access-fzqtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.813625 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory" (OuterVolumeSpecName: "inventory") pod "eb3265ea-5f4a-453d-ad52-b9e735a5c693" (UID: "eb3265ea-5f4a-453d-ad52-b9e735a5c693"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.827668 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb3265ea-5f4a-453d-ad52-b9e735a5c693" (UID: "eb3265ea-5f4a-453d-ad52-b9e735a5c693"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870131 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmxqw\" (UniqueName: \"kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870256 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870334 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870399 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870416 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb3265ea-5f4a-453d-ad52-b9e735a5c693-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.870426 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqtj\" (UniqueName: \"kubernetes.io/projected/eb3265ea-5f4a-453d-ad52-b9e735a5c693-kube-api-access-fzqtj\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.911644 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-k65r8"] Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.971569 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.971849 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmxqw\" (UniqueName: \"kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.971982 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 
09:29:34.982862 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.985662 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:34 crc kubenswrapper[5058]: I1014 09:29:34.988213 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmxqw\" (UniqueName: \"kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw\") pod \"run-os-openstack-openstack-cell1-k65r8\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:35 crc kubenswrapper[5058]: E1014 09:29:35.103551 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb3265ea_5f4a_453d_ad52_b9e735a5c693.slice\": RecentStats: unable to find data in memory cache]" Oct 14 09:29:35 crc kubenswrapper[5058]: I1014 09:29:35.225653 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:35 crc kubenswrapper[5058]: I1014 09:29:35.633624 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-k65r8"] Oct 14 09:29:35 crc kubenswrapper[5058]: I1014 09:29:35.691619 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-k65r8" event={"ID":"98754677-6b19-4bca-abef-380800bf88ac","Type":"ContainerStarted","Data":"ddae03832bffa2363ed9a6dcd14c543a7d8256423a17cb611f90720a62d02849"} Oct 14 09:29:37 crc kubenswrapper[5058]: I1014 09:29:37.725096 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-k65r8" event={"ID":"98754677-6b19-4bca-abef-380800bf88ac","Type":"ContainerStarted","Data":"2db7e87b9615769a9b51be84fe9cec45df1358dbb14cfa59171f9607138febec"} Oct 14 09:29:37 crc kubenswrapper[5058]: I1014 09:29:37.749092 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-k65r8" podStartSLOduration=3.280520952 podStartE2EDuration="3.749066857s" podCreationTimestamp="2025-10-14 09:29:34 +0000 UTC" firstStartedPulling="2025-10-14 09:29:35.638675086 +0000 UTC m=+9723.549758922" lastFinishedPulling="2025-10-14 09:29:36.107220991 +0000 UTC m=+9724.018304827" observedRunningTime="2025-10-14 09:29:37.748309495 +0000 UTC m=+9725.659393341" watchObservedRunningTime="2025-10-14 09:29:37.749066857 +0000 UTC m=+9725.660150673" Oct 14 09:29:42 crc kubenswrapper[5058]: I1014 09:29:42.800509 5058 generic.go:334] "Generic (PLEG): container finished" podID="a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" containerID="b883bbb1d934c456181bd342c4139cf691b05f4eb8862018b66470909cbbcaae" exitCode=0 Oct 14 09:29:42 crc kubenswrapper[5058]: I1014 09:29:42.845360 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
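pod="openstack/configure-os-openstack-openstack-cell2-j89ml" event={"ID":"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10","Type":"ContainerDied","Data":"b883bbb1d934c456181bd342c4139cf691b05f4eb8862018b66470909cbbcaae"}

The pod_startup_latency_tracker entry for run-os-openstack-openstack-cell1-k65r8 above is internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window, and the figures can be recomputed from the monotonic m=+ offsets in the log. A quick check in Go (constants copied from that entry; the ~ comments allow for float rounding):

package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 9723.549758922 // m=+ offset of firstStartedPulling
		lastFinishedPulling = 9724.018304827 // m=+ offset of lastFinishedPulling
		podStartE2E         = 3.749066857    // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image-pull window:   ~%.9fs\n", pull)             // ~0.468545905s
	fmt.Printf("podStartSLOduration: ~%.9fs\n", podStartE2E-pull) // ~3.280520952s
}

So roughly 0.47s of the 3.75s end-to-end startup was image pulling, leaving the 3.280520952s SLO figure reported in the log.
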
pod="openstack/configure-os-openstack-openstack-cell2-j89ml" event={"ID":"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10","Type":"ContainerDied","Data":"b883bbb1d934c456181bd342c4139cf691b05f4eb8862018b66470909cbbcaae"} Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.355296 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.426074 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh5rj\" (UniqueName: \"kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj\") pod \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.426255 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key\") pod \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.426358 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory\") pod \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\" (UID: \"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10\") " Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.444415 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj" (OuterVolumeSpecName: "kube-api-access-zh5rj") pod "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" (UID: "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10"). InnerVolumeSpecName "kube-api-access-zh5rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.453289 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" (UID: "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.467027 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory" (OuterVolumeSpecName: "inventory") pod "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" (UID: "a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.529617 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.529684 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.529716 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh5rj\" (UniqueName: \"kubernetes.io/projected/a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10-kube-api-access-zh5rj\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.835489 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" event={"ID":"a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10","Type":"ContainerDied","Data":"30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b"} Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.835562 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a37799a360ff2107e1511eb613cd09332b31200447a9c75503f1be8627394b" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.835571 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell2-j89ml" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.964203 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-n2d66"] Oct 14 09:29:44 crc kubenswrapper[5058]: E1014 09:29:44.965271 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.965304 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.965691 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10" containerName="configure-os-openstack-openstack-cell2" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.966984 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.971715 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.971787 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:29:44 crc kubenswrapper[5058]: I1014 09:29:44.978438 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-n2d66"] Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.039770 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.039895 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ztd\" (UniqueName: \"kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.039944 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.040000 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.040206 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell2\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.142249 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.142311 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ztd\" (UniqueName: \"kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.142333 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.142370 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.142471 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell2\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.149780 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.150522 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell2\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.152073 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.159018 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.162635 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ztd\" (UniqueName: \"kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd\") pod \"ssh-known-hosts-openstack-n2d66\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.304702 5058 util.go:30] "No sandbox for pod can be found. 
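Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n2d66"

Unlike the per-cell jobs, ssh-known-hosts-openstack-n2d66 mounts a numbered inventory secret per node set (inventory-0, inventory-1) plus one ssh-key secret per cell (ssh-key-openstack-cell1, ssh-key-openstack-cell2), since it aggregates host keys across both cells; note also that the MountVolume.SetUp completions land in a different order than the requests, because the mounts run concurrently. Following one pod through a journal flowed like this is easier after splitting it back into entries. A hedged sketch (podgrep is not a real tool; the entry-prefix regex assumes the exact "Oct 14 ... crc process[pid]: " framing of this log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: podgrep <pod-name-or-uid> < journal.txt")
		os.Exit(1)
	}
	needle := os.Args[1]
	// Read the whole flowed journal, then split it at each entry prefix.
	var sb strings.Builder
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		sb.WriteString(sc.Text())
		sb.WriteByte(' ')
	}
	text := sb.String()
	prefix := regexp.MustCompile(`Oct 14 \d{2}:\d{2}:\d{2} crc [A-Za-z_.-]+\[\d+\]: `)
	locs := prefix.FindAllStringIndex(text, -1)
	for i, loc := range locs {
		end := len(text)
		if i+1 < len(locs) {
			end = locs[i+1][0]
		}
		if entry := strings.TrimSpace(text[loc[0]:end]); strings.Contains(entry, needle) {
			fmt.Println(entry) // one reassembled journal entry per line
		}
	}
}

For example, go run podgrep.go dd8f52ac-f61f-4197-be85-6a78e8e27159 < journal.txt would print every entry for this pod, one per line, in order.
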
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.853931 5058 generic.go:334] "Generic (PLEG): container finished" podID="98754677-6b19-4bca-abef-380800bf88ac" containerID="2db7e87b9615769a9b51be84fe9cec45df1358dbb14cfa59171f9607138febec" exitCode=0 Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.854079 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-k65r8" event={"ID":"98754677-6b19-4bca-abef-380800bf88ac","Type":"ContainerDied","Data":"2db7e87b9615769a9b51be84fe9cec45df1358dbb14cfa59171f9607138febec"} Oct 14 09:29:45 crc kubenswrapper[5058]: I1014 09:29:45.999122 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-n2d66"] Oct 14 09:29:46 crc kubenswrapper[5058]: I1014 09:29:46.874316 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n2d66" event={"ID":"dd8f52ac-f61f-4197-be85-6a78e8e27159","Type":"ContainerStarted","Data":"7f01929a1484bd1affe9efae6b2c27b5dd41669c93b575ef6f12e7b275b9356c"} Oct 14 09:29:46 crc kubenswrapper[5058]: I1014 09:29:46.874903 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n2d66" event={"ID":"dd8f52ac-f61f-4197-be85-6a78e8e27159","Type":"ContainerStarted","Data":"de169f48506449a62374e5a8496f2b9432ac0689e89eef672a4045dda8210bed"} Oct 14 09:29:46 crc kubenswrapper[5058]: I1014 09:29:46.905760 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-n2d66" podStartSLOduration=2.426504373 podStartE2EDuration="2.905729512s" podCreationTimestamp="2025-10-14 09:29:44 +0000 UTC" firstStartedPulling="2025-10-14 09:29:46.000323626 +0000 UTC m=+9733.911407472" lastFinishedPulling="2025-10-14 09:29:46.479548795 +0000 UTC m=+9734.390632611" observedRunningTime="2025-10-14 09:29:46.890986462 +0000 UTC m=+9734.802070308" watchObservedRunningTime="2025-10-14 09:29:46.905729512 +0000 UTC m=+9734.816813358" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.342061 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.411460 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmxqw\" (UniqueName: \"kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw\") pod \"98754677-6b19-4bca-abef-380800bf88ac\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.412164 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key\") pod \"98754677-6b19-4bca-abef-380800bf88ac\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.412212 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory\") pod \"98754677-6b19-4bca-abef-380800bf88ac\" (UID: \"98754677-6b19-4bca-abef-380800bf88ac\") " Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.418716 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw" (OuterVolumeSpecName: "kube-api-access-dmxqw") pod "98754677-6b19-4bca-abef-380800bf88ac" (UID: "98754677-6b19-4bca-abef-380800bf88ac"). InnerVolumeSpecName "kube-api-access-dmxqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.442389 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98754677-6b19-4bca-abef-380800bf88ac" (UID: "98754677-6b19-4bca-abef-380800bf88ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.444327 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory" (OuterVolumeSpecName: "inventory") pod "98754677-6b19-4bca-abef-380800bf88ac" (UID: "98754677-6b19-4bca-abef-380800bf88ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.515194 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmxqw\" (UniqueName: \"kubernetes.io/projected/98754677-6b19-4bca-abef-380800bf88ac-kube-api-access-dmxqw\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.515236 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.515247 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98754677-6b19-4bca-abef-380800bf88ac-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.886659 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-k65r8" event={"ID":"98754677-6b19-4bca-abef-380800bf88ac","Type":"ContainerDied","Data":"ddae03832bffa2363ed9a6dcd14c543a7d8256423a17cb611f90720a62d02849"} Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.886717 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddae03832bffa2363ed9a6dcd14c543a7d8256423a17cb611f90720a62d02849" Oct 14 09:29:47 crc kubenswrapper[5058]: I1014 09:29:47.886669 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-k65r8" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.001616 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-5dg8j"] Oct 14 09:29:48 crc kubenswrapper[5058]: E1014 09:29:48.002307 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98754677-6b19-4bca-abef-380800bf88ac" containerName="run-os-openstack-openstack-cell1" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.002334 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="98754677-6b19-4bca-abef-380800bf88ac" containerName="run-os-openstack-openstack-cell1" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.002648 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="98754677-6b19-4bca-abef-380800bf88ac" containerName="run-os-openstack-openstack-cell1" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.003579 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.006038 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.028372 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-5dg8j"] Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.127640 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.128160 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.128413 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vll5d\" (UniqueName: \"kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.230137 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vll5d\" (UniqueName: \"kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.230237 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.230346 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.234146 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.234382 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.254844 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vll5d\" (UniqueName: \"kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d\") pod \"reboot-os-openstack-openstack-cell1-5dg8j\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.329985 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:29:48 crc kubenswrapper[5058]: I1014 09:29:48.920974 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-5dg8j"] Oct 14 09:29:49 crc kubenswrapper[5058]: W1014 09:29:49.206303 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa3e25fa_c06a_408f_9f2a_cce6d3c5026a.slice/crio-dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c WatchSource:0}: Error finding container dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c: Status 404 returned error can't find the container with id dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c Oct 14 09:29:49 crc kubenswrapper[5058]: I1014 09:29:49.936081 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" event={"ID":"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a","Type":"ContainerStarted","Data":"dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c"} Oct 14 09:29:50 crc kubenswrapper[5058]: I1014 09:29:50.949077 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" event={"ID":"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a","Type":"ContainerStarted","Data":"079de6bfd86fe2d1458e846574323d9011dc9c56fe26ab75f908cc5e9ebcaa2f"} Oct 14 09:29:50 crc kubenswrapper[5058]: I1014 09:29:50.977080 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" podStartSLOduration=3.37350473 podStartE2EDuration="3.977054332s" podCreationTimestamp="2025-10-14 09:29:47 +0000 UTC" firstStartedPulling="2025-10-14 09:29:49.20970547 +0000 UTC m=+9737.120789316" lastFinishedPulling="2025-10-14 09:29:49.813255082 +0000 UTC m=+9737.724338918" observedRunningTime="2025-10-14 09:29:50.974141359 +0000 UTC m=+9738.885225205" watchObservedRunningTime="2025-10-14 09:29:50.977054332 +0000 UTC m=+9738.888138168" Oct 14 09:29:58 crc kubenswrapper[5058]: I1014 09:29:58.044779 5058 generic.go:334] "Generic (PLEG): container finished" podID="dd8f52ac-f61f-4197-be85-6a78e8e27159" containerID="7f01929a1484bd1affe9efae6b2c27b5dd41669c93b575ef6f12e7b275b9356c" exitCode=0 Oct 14 09:29:58 crc kubenswrapper[5058]: I1014 09:29:58.044858 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n2d66" event={"ID":"dd8f52ac-f61f-4197-be85-6a78e8e27159","Type":"ContainerDied","Data":"7f01929a1484bd1affe9efae6b2c27b5dd41669c93b575ef6f12e7b275b9356c"} Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.633833 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.721642 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ztd\" (UniqueName: \"kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd\") pod \"dd8f52ac-f61f-4197-be85-6a78e8e27159\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.721836 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell2\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2\") pod \"dd8f52ac-f61f-4197-be85-6a78e8e27159\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.721885 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1\") pod \"dd8f52ac-f61f-4197-be85-6a78e8e27159\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.721927 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1\") pod \"dd8f52ac-f61f-4197-be85-6a78e8e27159\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.721999 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0\") pod \"dd8f52ac-f61f-4197-be85-6a78e8e27159\" (UID: \"dd8f52ac-f61f-4197-be85-6a78e8e27159\") " Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.728734 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd" (OuterVolumeSpecName: "kube-api-access-g9ztd") pod "dd8f52ac-f61f-4197-be85-6a78e8e27159" (UID: "dd8f52ac-f61f-4197-be85-6a78e8e27159"). InnerVolumeSpecName "kube-api-access-g9ztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.754677 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2" (OuterVolumeSpecName: "ssh-key-openstack-cell2") pod "dd8f52ac-f61f-4197-be85-6a78e8e27159" (UID: "dd8f52ac-f61f-4197-be85-6a78e8e27159"). InnerVolumeSpecName "ssh-key-openstack-cell2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.757205 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1" (OuterVolumeSpecName: "inventory-1") pod "dd8f52ac-f61f-4197-be85-6a78e8e27159" (UID: "dd8f52ac-f61f-4197-be85-6a78e8e27159"). InnerVolumeSpecName "inventory-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.772722 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "dd8f52ac-f61f-4197-be85-6a78e8e27159" (UID: "dd8f52ac-f61f-4197-be85-6a78e8e27159"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.776102 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "dd8f52ac-f61f-4197-be85-6a78e8e27159" (UID: "dd8f52ac-f61f-4197-be85-6a78e8e27159"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.825415 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ztd\" (UniqueName: \"kubernetes.io/projected/dd8f52ac-f61f-4197-be85-6a78e8e27159-kube-api-access-g9ztd\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.825456 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell2\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell2\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.825469 5058 reconciler_common.go:293] "Volume detached for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.825483 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 14 09:29:59 crc kubenswrapper[5058]: I1014 09:29:59.825493 5058 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dd8f52ac-f61f-4197-be85-6a78e8e27159-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.085720 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n2d66" event={"ID":"dd8f52ac-f61f-4197-be85-6a78e8e27159","Type":"ContainerDied","Data":"de169f48506449a62374e5a8496f2b9432ac0689e89eef672a4045dda8210bed"} Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.085769 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de169f48506449a62374e5a8496f2b9432ac0689e89eef672a4045dda8210bed" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.085810 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n2d66" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.135958 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell2-qqsbb"] Oct 14 09:30:00 crc kubenswrapper[5058]: E1014 09:30:00.136379 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8f52ac-f61f-4197-be85-6a78e8e27159" containerName="ssh-known-hosts-openstack" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.136395 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8f52ac-f61f-4197-be85-6a78e8e27159" containerName="ssh-known-hosts-openstack" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.136614 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8f52ac-f61f-4197-be85-6a78e8e27159" containerName="ssh-known-hosts-openstack" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.137363 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.144482 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.144870 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.158991 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell2-qqsbb"] Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.181588 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"] Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.184038 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.186416 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.186875 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.202396 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"] Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.232910 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgsc\" (UniqueName: \"kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.233231 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.233527 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336023 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgsc\" (UniqueName: \"kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336179 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336241 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336306 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336374 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.336404 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffxj\" (UniqueName: \"kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.341697 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.346611 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.358213 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgsc\" (UniqueName: \"kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc\") pod \"run-os-openstack-openstack-cell2-qqsbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.438933 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.439096 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.439193 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lffxj\" (UniqueName: \"kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.441953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.447989 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.455544 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.458846 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lffxj\" (UniqueName: \"kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj\") pod \"collect-profiles-29340570-z8rdp\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:00 crc kubenswrapper[5058]: I1014 09:30:00.500065 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:01 crc kubenswrapper[5058]: I1014 09:30:01.145001 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"] Oct 14 09:30:01 crc kubenswrapper[5058]: I1014 09:30:01.154203 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell2-qqsbb"] Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 09:30:02.108819 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" event={"ID":"b381c5b6-134c-4895-9594-c62154835cbb","Type":"ContainerStarted","Data":"3aff5f2a8ca94c6959a00fc350b06a6a5d7c4f7274f43c7af1ee53b6e45470e9"} Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 09:30:02.109341 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" event={"ID":"b381c5b6-134c-4895-9594-c62154835cbb","Type":"ContainerStarted","Data":"17ba623196c22b7078eb314028423464366a61af4289e37a0736a00aeb86565b"} Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 09:30:02.111515 5058 generic.go:334] "Generic (PLEG): container finished" podID="7066df10-e0b6-43a3-ad6e-35614021029d" containerID="a8d0ad54687ffb2212b31763b5902cbbb9fa64b5c170ada9feac2b464d40c3e6" exitCode=0 Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 09:30:02.111586 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" event={"ID":"7066df10-e0b6-43a3-ad6e-35614021029d","Type":"ContainerDied","Data":"a8d0ad54687ffb2212b31763b5902cbbb9fa64b5c170ada9feac2b464d40c3e6"} Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 09:30:02.111634 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" event={"ID":"7066df10-e0b6-43a3-ad6e-35614021029d","Type":"ContainerStarted","Data":"8420ebdcfaba2b72cc9499685018ea0a9114dff89bd614a7500a768d7e1907c9"} Oct 14 09:30:02 crc kubenswrapper[5058]: I1014 
09:30:02.173096 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" podStartSLOduration=1.500091073 podStartE2EDuration="2.173076484s" podCreationTimestamp="2025-10-14 09:30:00 +0000 UTC" firstStartedPulling="2025-10-14 09:30:01.158941829 +0000 UTC m=+9749.070025625" lastFinishedPulling="2025-10-14 09:30:01.83192722 +0000 UTC m=+9749.743011036" observedRunningTime="2025-10-14 09:30:02.164732056 +0000 UTC m=+9750.075815872" watchObservedRunningTime="2025-10-14 09:30:02.173076484 +0000 UTC m=+9750.084160300" Oct 14 09:30:03 crc kubenswrapper[5058]: I1014 09:30:03.656214 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:30:03 crc kubenswrapper[5058]: I1014 09:30:03.656642 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:30:03 crc kubenswrapper[5058]: I1014 09:30:03.656713 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:30:03 crc kubenswrapper[5058]: I1014 09:30:03.657948 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:30:03 crc kubenswrapper[5058]: I1014 09:30:03.658050 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4" gracePeriod=600 Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.018924 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.134141 5058 util.go:48] "No ready sandbox for pod can be found. 
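Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"

The machine-config-daemon sequence above is the standard liveness-restart path: the kubelet's HTTP GET against 127.0.0.1:8798/health is refused, the probe result flips to failure, and kuberuntime kills the container with the pod's grace period (600s here) so it can be restarted; the ContainerDied/ContainerStarted pair at 09:30:04 below is that restart. The probe itself is just a GET with a short timeout; a minimal Go stand-in that reproduces the same error when nothing listens on the port:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // probes use a short per-attempt timeout
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // any status in 200-399 counts as success
}
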
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.134138 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp" event={"ID":"7066df10-e0b6-43a3-ad6e-35614021029d","Type":"ContainerDied","Data":"8420ebdcfaba2b72cc9499685018ea0a9114dff89bd614a7500a768d7e1907c9"} Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.134541 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8420ebdcfaba2b72cc9499685018ea0a9114dff89bd614a7500a768d7e1907c9" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.137890 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4" exitCode=0 Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.137912 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4"} Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.137942 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47"} Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.137965 5058 scope.go:117] "RemoveContainer" containerID="0b6fc8c0c7447d60671ca04fc376ecd4b9c8b3b39c1f0b81e6bc3df8ddcb14df" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.162539 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume\") pod \"7066df10-e0b6-43a3-ad6e-35614021029d\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.162633 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume\") pod \"7066df10-e0b6-43a3-ad6e-35614021029d\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.162962 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lffxj\" (UniqueName: \"kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj\") pod \"7066df10-e0b6-43a3-ad6e-35614021029d\" (UID: \"7066df10-e0b6-43a3-ad6e-35614021029d\") " Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.163205 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume" (OuterVolumeSpecName: "config-volume") pod "7066df10-e0b6-43a3-ad6e-35614021029d" (UID: "7066df10-e0b6-43a3-ad6e-35614021029d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.163991 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7066df10-e0b6-43a3-ad6e-35614021029d-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.171168 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7066df10-e0b6-43a3-ad6e-35614021029d" (UID: "7066df10-e0b6-43a3-ad6e-35614021029d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.171417 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj" (OuterVolumeSpecName: "kube-api-access-lffxj") pod "7066df10-e0b6-43a3-ad6e-35614021029d" (UID: "7066df10-e0b6-43a3-ad6e-35614021029d"). InnerVolumeSpecName "kube-api-access-lffxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.265338 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lffxj\" (UniqueName: \"kubernetes.io/projected/7066df10-e0b6-43a3-ad6e-35614021029d-kube-api-access-lffxj\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:04 crc kubenswrapper[5058]: I1014 09:30:04.265367 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7066df10-e0b6-43a3-ad6e-35614021029d-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:05 crc kubenswrapper[5058]: I1014 09:30:05.121215 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q"] Oct 14 09:30:05 crc kubenswrapper[5058]: I1014 09:30:05.132991 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340525-7gg2q"] Oct 14 09:30:06 crc kubenswrapper[5058]: I1014 09:30:06.813972 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a08084-e1c4-4449-8c5c-c15d016cfa6a" path="/var/lib/kubelet/pods/79a08084-e1c4-4449-8c5c-c15d016cfa6a/volumes" Oct 14 09:30:07 crc kubenswrapper[5058]: I1014 09:30:07.181566 5058 generic.go:334] "Generic (PLEG): container finished" podID="aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" containerID="079de6bfd86fe2d1458e846574323d9011dc9c56fe26ab75f908cc5e9ebcaa2f" exitCode=0 Oct 14 09:30:07 crc kubenswrapper[5058]: I1014 09:30:07.181665 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" event={"ID":"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a","Type":"ContainerDied","Data":"079de6bfd86fe2d1458e846574323d9011dc9c56fe26ab75f908cc5e9ebcaa2f"} Oct 14 09:30:08 crc kubenswrapper[5058]: I1014 09:30:08.826404 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:30:08 crc kubenswrapper[5058]: I1014 09:30:08.992837 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vll5d\" (UniqueName: \"kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d\") pod \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " Oct 14 09:30:08 crc kubenswrapper[5058]: I1014 09:30:08.992888 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key\") pod \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " Oct 14 09:30:08 crc kubenswrapper[5058]: I1014 09:30:08.992984 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory\") pod \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\" (UID: \"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a\") " Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.001244 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d" (OuterVolumeSpecName: "kube-api-access-vll5d") pod "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" (UID: "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a"). InnerVolumeSpecName "kube-api-access-vll5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.034204 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" (UID: "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.047829 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory" (OuterVolumeSpecName: "inventory") pod "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" (UID: "aa3e25fa-c06a-408f-9f2a-cce6d3c5026a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.095433 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vll5d\" (UniqueName: \"kubernetes.io/projected/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-kube-api-access-vll5d\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.095822 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.095847 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa3e25fa-c06a-408f-9f2a-cce6d3c5026a-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.205451 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" event={"ID":"aa3e25fa-c06a-408f-9f2a-cce6d3c5026a","Type":"ContainerDied","Data":"dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c"} Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.205485 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd255dbbc75797ad6e3dedcb3c2569675cdd76a9a9c6bbb30e25f8d52971885c" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.205566 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-5dg8j" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.300248 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-spdc7"] Oct 14 09:30:09 crc kubenswrapper[5058]: E1014 09:30:09.300745 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7066df10-e0b6-43a3-ad6e-35614021029d" containerName="collect-profiles" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.300771 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7066df10-e0b6-43a3-ad6e-35614021029d" containerName="collect-profiles" Oct 14 09:30:09 crc kubenswrapper[5058]: E1014 09:30:09.300823 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" containerName="reboot-os-openstack-openstack-cell1" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.300837 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" containerName="reboot-os-openstack-openstack-cell1" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.301192 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3e25fa-c06a-408f-9f2a-cce6d3c5026a" containerName="reboot-os-openstack-openstack-cell1" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.301224 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7066df10-e0b6-43a3-ad6e-35614021029d" containerName="collect-profiles" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.303029 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.307253 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.313552 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.319114 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-spdc7"] Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.402897 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.402960 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.402982 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403006 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403050 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403078 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwj5\" (UniqueName: \"kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403108 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403167 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403308 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403431 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.403602 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505382 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505476 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505515 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505533 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505556 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505577 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505604 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txwj5\" (UniqueName: \"kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505633 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505650 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.505684 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.506458 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.511303 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.511408 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.511509 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.511991 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.512342 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.514696 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.515006 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.515648 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.515940 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.520583 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.536321 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwj5\" (UniqueName: \"kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5\") pod \"install-certs-openstack-openstack-cell1-spdc7\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:09 crc kubenswrapper[5058]: I1014 09:30:09.628626 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:10 crc kubenswrapper[5058]: I1014 09:30:10.294597 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-spdc7"] Oct 14 09:30:11 crc kubenswrapper[5058]: I1014 09:30:11.233129 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" event={"ID":"a16fc909-2fe0-43a2-96b1-ae60b842e473","Type":"ContainerStarted","Data":"d0e8de7b049cbccc415bb33d6577850c8afca89336929be317b32f7104e61e1c"} Oct 14 09:30:11 crc kubenswrapper[5058]: I1014 09:30:11.233455 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" event={"ID":"a16fc909-2fe0-43a2-96b1-ae60b842e473","Type":"ContainerStarted","Data":"8024478e265531083651ff16057793630e0f9a4f2efb86c43169f4e5380745dc"} Oct 14 09:30:11 crc kubenswrapper[5058]: I1014 09:30:11.253889 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" podStartSLOduration=1.75719827 podStartE2EDuration="2.253867716s" podCreationTimestamp="2025-10-14 09:30:09 +0000 UTC" firstStartedPulling="2025-10-14 09:30:10.285382632 +0000 UTC m=+9758.196466478" lastFinishedPulling="2025-10-14 09:30:10.782052088 +0000 UTC m=+9758.693135924" observedRunningTime="2025-10-14 09:30:11.251966342 +0000 UTC m=+9759.163050178" watchObservedRunningTime="2025-10-14 09:30:11.253867716 +0000 UTC m=+9759.164951532" Oct 14 09:30:13 crc kubenswrapper[5058]: I1014 09:30:13.259715 5058 generic.go:334] "Generic (PLEG): container finished" podID="b381c5b6-134c-4895-9594-c62154835cbb" containerID="3aff5f2a8ca94c6959a00fc350b06a6a5d7c4f7274f43c7af1ee53b6e45470e9" exitCode=0 Oct 14 09:30:13 crc kubenswrapper[5058]: I1014 09:30:13.259827 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" event={"ID":"b381c5b6-134c-4895-9594-c62154835cbb","Type":"ContainerDied","Data":"3aff5f2a8ca94c6959a00fc350b06a6a5d7c4f7274f43c7af1ee53b6e45470e9"} Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.671637 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.826151 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key\") pod \"b381c5b6-134c-4895-9594-c62154835cbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.826214 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory\") pod \"b381c5b6-134c-4895-9594-c62154835cbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.826275 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgsc\" (UniqueName: \"kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc\") pod \"b381c5b6-134c-4895-9594-c62154835cbb\" (UID: \"b381c5b6-134c-4895-9594-c62154835cbb\") " Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.840233 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc" (OuterVolumeSpecName: "kube-api-access-gzgsc") pod "b381c5b6-134c-4895-9594-c62154835cbb" (UID: "b381c5b6-134c-4895-9594-c62154835cbb"). InnerVolumeSpecName "kube-api-access-gzgsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.856255 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory" (OuterVolumeSpecName: "inventory") pod "b381c5b6-134c-4895-9594-c62154835cbb" (UID: "b381c5b6-134c-4895-9594-c62154835cbb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.862229 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b381c5b6-134c-4895-9594-c62154835cbb" (UID: "b381c5b6-134c-4895-9594-c62154835cbb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.929908 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.929959 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b381c5b6-134c-4895-9594-c62154835cbb-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:14 crc kubenswrapper[5058]: I1014 09:30:14.929981 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzgsc\" (UniqueName: \"kubernetes.io/projected/b381c5b6-134c-4895-9594-c62154835cbb-kube-api-access-gzgsc\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.285485 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" event={"ID":"b381c5b6-134c-4895-9594-c62154835cbb","Type":"ContainerDied","Data":"17ba623196c22b7078eb314028423464366a61af4289e37a0736a00aeb86565b"} Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.285532 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17ba623196c22b7078eb314028423464366a61af4289e37a0736a00aeb86565b" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.285670 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell2-qqsbb" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.377830 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell2-sst57"] Oct 14 09:30:15 crc kubenswrapper[5058]: E1014 09:30:15.378410 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b381c5b6-134c-4895-9594-c62154835cbb" containerName="run-os-openstack-openstack-cell2" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.378432 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b381c5b6-134c-4895-9594-c62154835cbb" containerName="run-os-openstack-openstack-cell2" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.378710 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b381c5b6-134c-4895-9594-c62154835cbb" containerName="run-os-openstack-openstack-cell2" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.379724 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.383845 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.383999 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.398887 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell2-sst57"] Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.541516 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.541642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cf92\" (UniqueName: \"kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.541836 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.645415 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.645497 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cf92\" (UniqueName: \"kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.645757 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.654177 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc 
kubenswrapper[5058]: I1014 09:30:15.660744 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.667953 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cf92\" (UniqueName: \"kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92\") pod \"reboot-os-openstack-openstack-cell2-sst57\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:15 crc kubenswrapper[5058]: I1014 09:30:15.712859 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:16 crc kubenswrapper[5058]: I1014 09:30:16.327886 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell2-sst57"] Oct 14 09:30:16 crc kubenswrapper[5058]: W1014 09:30:16.905021 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d0e557_c74b_47f3_9089_4ff59ca5ab96.slice/crio-5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605 WatchSource:0}: Error finding container 5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605: Status 404 returned error can't find the container with id 5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605 Oct 14 09:30:17 crc kubenswrapper[5058]: I1014 09:30:17.308781 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" event={"ID":"b0d0e557-c74b-47f3-9089-4ff59ca5ab96","Type":"ContainerStarted","Data":"5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605"} Oct 14 09:30:18 crc kubenswrapper[5058]: I1014 09:30:18.325014 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" event={"ID":"b0d0e557-c74b-47f3-9089-4ff59ca5ab96","Type":"ContainerStarted","Data":"a43e46da66c7a3b2e41ca462236c61627340e7e8ced3c2f995390add0af52f67"} Oct 14 09:30:18 crc kubenswrapper[5058]: I1014 09:30:18.348163 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" podStartSLOduration=2.610633467 podStartE2EDuration="3.348146318s" podCreationTimestamp="2025-10-14 09:30:15 +0000 UTC" firstStartedPulling="2025-10-14 09:30:16.90972508 +0000 UTC m=+9764.820808926" lastFinishedPulling="2025-10-14 09:30:17.647237961 +0000 UTC m=+9765.558321777" observedRunningTime="2025-10-14 09:30:18.344412002 +0000 UTC m=+9766.255495878" watchObservedRunningTime="2025-10-14 09:30:18.348146318 +0000 UTC m=+9766.259230124" Oct 14 09:30:34 crc kubenswrapper[5058]: I1014 09:30:34.543470 5058 generic.go:334] "Generic (PLEG): container finished" podID="b0d0e557-c74b-47f3-9089-4ff59ca5ab96" containerID="a43e46da66c7a3b2e41ca462236c61627340e7e8ced3c2f995390add0af52f67" exitCode=0 Oct 14 09:30:34 crc kubenswrapper[5058]: I1014 09:30:34.543599 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" 
event={"ID":"b0d0e557-c74b-47f3-9089-4ff59ca5ab96","Type":"ContainerDied","Data":"a43e46da66c7a3b2e41ca462236c61627340e7e8ced3c2f995390add0af52f67"} Oct 14 09:30:35 crc kubenswrapper[5058]: I1014 09:30:35.565703 5058 generic.go:334] "Generic (PLEG): container finished" podID="a16fc909-2fe0-43a2-96b1-ae60b842e473" containerID="d0e8de7b049cbccc415bb33d6577850c8afca89336929be317b32f7104e61e1c" exitCode=0 Oct 14 09:30:35 crc kubenswrapper[5058]: I1014 09:30:35.565779 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" event={"ID":"a16fc909-2fe0-43a2-96b1-ae60b842e473","Type":"ContainerDied","Data":"d0e8de7b049cbccc415bb33d6577850c8afca89336929be317b32f7104e61e1c"} Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.128350 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.181049 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key\") pod \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.181217 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory\") pod \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.181309 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cf92\" (UniqueName: \"kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92\") pod \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\" (UID: \"b0d0e557-c74b-47f3-9089-4ff59ca5ab96\") " Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.196968 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92" (OuterVolumeSpecName: "kube-api-access-6cf92") pod "b0d0e557-c74b-47f3-9089-4ff59ca5ab96" (UID: "b0d0e557-c74b-47f3-9089-4ff59ca5ab96"). InnerVolumeSpecName "kube-api-access-6cf92". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.219821 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory" (OuterVolumeSpecName: "inventory") pod "b0d0e557-c74b-47f3-9089-4ff59ca5ab96" (UID: "b0d0e557-c74b-47f3-9089-4ff59ca5ab96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.247579 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b0d0e557-c74b-47f3-9089-4ff59ca5ab96" (UID: "b0d0e557-c74b-47f3-9089-4ff59ca5ab96"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.284713 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.284766 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.284787 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cf92\" (UniqueName: \"kubernetes.io/projected/b0d0e557-c74b-47f3-9089-4ff59ca5ab96-kube-api-access-6cf92\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.584576 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" event={"ID":"b0d0e557-c74b-47f3-9089-4ff59ca5ab96","Type":"ContainerDied","Data":"5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605"} Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.584651 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.584610 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell2-sst57" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.711308 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell2-nkr9n"] Oct 14 09:30:36 crc kubenswrapper[5058]: E1014 09:30:36.712074 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d0e557-c74b-47f3-9089-4ff59ca5ab96" containerName="reboot-os-openstack-openstack-cell2" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.712095 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d0e557-c74b-47f3-9089-4ff59ca5ab96" containerName="reboot-os-openstack-openstack-cell2" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.712392 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d0e557-c74b-47f3-9089-4ff59ca5ab96" containerName="reboot-os-openstack-openstack-cell2" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.718185 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.720498 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.726906 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.727341 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell2-nkr9n"] Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795232 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795309 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795379 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795401 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbj7d\" (UniqueName: \"kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795448 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795477 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795503 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795556 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795577 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795629 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.795658 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: E1014 09:30:36.884643 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d0e557_c74b_47f3_9089_4ff59ca5ab96.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d0e557_c74b_47f3_9089_4ff59ca5ab96.slice/crio-5c53aaebf515c117aaaf8a9a8d34c97c172188ef605188283569fa5af2241605\": RecentStats: unable to find data in memory cache]" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.897965 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.898047 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: 
\"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.898783 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.898863 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbj7d\" (UniqueName: \"kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.899346 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.899372 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.899425 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.900088 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.900154 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.900308 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: 
\"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.903922 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.915746 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.915883 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.915998 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.916197 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.916460 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.916980 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.920520 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbj7d\" (UniqueName: \"kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 
09:30:36.918941 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.926599 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.926715 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:36 crc kubenswrapper[5058]: I1014 09:30:36.931822 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell2-nkr9n\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.047619 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.212594 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.212594 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.314900 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.314959 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315024 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315065 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315097 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315143 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315181 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwj5\" (UniqueName: \"kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315261 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315339 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315444 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.315487 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle\") pod \"a16fc909-2fe0-43a2-96b1-ae60b842e473\" (UID: \"a16fc909-2fe0-43a2-96b1-ae60b842e473\") " Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.321609 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.321700 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.322285 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.322712 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.325267 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.325411 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5" (OuterVolumeSpecName: "kube-api-access-txwj5") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "kube-api-access-txwj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.325579 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.328747 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.331977 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.350829 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory" (OuterVolumeSpecName: "inventory") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.362159 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a16fc909-2fe0-43a2-96b1-ae60b842e473" (UID: "a16fc909-2fe0-43a2-96b1-ae60b842e473"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.417960 5058 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.417999 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418009 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418021 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418032 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418040 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418048 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418056 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418065 5058 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418074 5058 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16fc909-2fe0-43a2-96b1-ae60b842e473-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.418082 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txwj5\" (UniqueName: \"kubernetes.io/projected/a16fc909-2fe0-43a2-96b1-ae60b842e473-kube-api-access-txwj5\") on node \"crc\" DevicePath \"\"" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.597263 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" event={"ID":"a16fc909-2fe0-43a2-96b1-ae60b842e473","Type":"ContainerDied","Data":"8024478e265531083651ff16057793630e0f9a4f2efb86c43169f4e5380745dc"} Oct 14 09:30:37 crc 
kubenswrapper[5058]: I1014 09:30:37.597506 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8024478e265531083651ff16057793630e0f9a4f2efb86c43169f4e5380745dc" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.597563 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-spdc7" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.626863 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell2-nkr9n"] Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.692720 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9gfxk"] Oct 14 09:30:37 crc kubenswrapper[5058]: E1014 09:30:37.693204 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16fc909-2fe0-43a2-96b1-ae60b842e473" containerName="install-certs-openstack-openstack-cell1" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.693224 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16fc909-2fe0-43a2-96b1-ae60b842e473" containerName="install-certs-openstack-openstack-cell1" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.693457 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16fc909-2fe0-43a2-96b1-ae60b842e473" containerName="install-certs-openstack-openstack-cell1" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.694272 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.700277 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9gfxk"] Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.700383 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.700456 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.700760 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.825925 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.826147 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.826208 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " 
pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.826283 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.826504 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx58c\" (UniqueName: \"kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.928897 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.928982 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.929063 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.929104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx58c\" (UniqueName: \"kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.929146 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.930372 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.935364 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.935772 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.939298 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:37 crc kubenswrapper[5058]: I1014 09:30:37.962861 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx58c\" (UniqueName: \"kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c\") pod \"ovn-openstack-openstack-cell1-9gfxk\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:38 crc kubenswrapper[5058]: I1014 09:30:38.014946 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:30:38 crc kubenswrapper[5058]: I1014 09:30:38.610153 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" event={"ID":"103b3718-026c-4c76-b31a-9514db89eb0d","Type":"ContainerStarted","Data":"28e503abe2f0a2d07699e61544e81ddd7bc3e0dd4a258ad3ad0ed9b9de286cf8"} Oct 14 09:30:38 crc kubenswrapper[5058]: I1014 09:30:38.610955 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" event={"ID":"103b3718-026c-4c76-b31a-9514db89eb0d","Type":"ContainerStarted","Data":"0c808930255ec33b1235a3628246abfb9c6a466c4ad3ffbeb0bc3799f2c51ed0"} Oct 14 09:30:38 crc kubenswrapper[5058]: I1014 09:30:38.648013 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" podStartSLOduration=2.110611961 podStartE2EDuration="2.647989588s" podCreationTimestamp="2025-10-14 09:30:36 +0000 UTC" firstStartedPulling="2025-10-14 09:30:37.635352795 +0000 UTC m=+9785.546436611" lastFinishedPulling="2025-10-14 09:30:38.172730402 +0000 UTC m=+9786.083814238" observedRunningTime="2025-10-14 09:30:38.634021579 +0000 UTC m=+9786.545105405" watchObservedRunningTime="2025-10-14 09:30:38.647989588 +0000 UTC m=+9786.559073404" Oct 14 09:30:38 crc kubenswrapper[5058]: I1014 09:30:38.667118 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9gfxk"] Oct 14 09:30:38 crc kubenswrapper[5058]: W1014 09:30:38.670015 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b07bed_d3c4_40f7_bbb8_4d487bea3c5c.slice/crio-fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409 WatchSource:0}: Error finding container fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409: Status 404 returned error can't find the container 
with id fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409 Oct 14 09:30:39 crc kubenswrapper[5058]: I1014 09:30:39.623609 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" event={"ID":"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c","Type":"ContainerStarted","Data":"fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409"} Oct 14 09:30:40 crc kubenswrapper[5058]: I1014 09:30:40.639704 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" event={"ID":"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c","Type":"ContainerStarted","Data":"9ebc0bcc20da7ae43aa71934a62c73879d54f19dca03840a6f7b02259228c91a"} Oct 14 09:30:40 crc kubenswrapper[5058]: I1014 09:30:40.658096 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" podStartSLOduration=2.532650643 podStartE2EDuration="3.65808111s" podCreationTimestamp="2025-10-14 09:30:37 +0000 UTC" firstStartedPulling="2025-10-14 09:30:38.672628 +0000 UTC m=+9786.583711806" lastFinishedPulling="2025-10-14 09:30:39.798058467 +0000 UTC m=+9787.709142273" observedRunningTime="2025-10-14 09:30:40.656511155 +0000 UTC m=+9788.567594961" watchObservedRunningTime="2025-10-14 09:30:40.65808111 +0000 UTC m=+9788.569164916"
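The pod_startup_latency_tracker record above is internally consistent and shows how the two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:30:40.65808111 minus 09:30:37 gives 3.65808111s), and podStartSLOduration subtracts the image-pull window bounded by firstStartedPulling and lastFinishedPulling (m=+9787.709142273 minus m=+9786.583711806 gives 1.125430467s, and 3.65808111 minus 1.125430467 gives 2.532650643), so the SLO figure excludes time spent pulling images. A quick check of that decomposition with the values copied from the record (Python):

from datetime import datetime, timezone

# Values from the "Observed pod startup duration" record for
# openstack/ovn-openstack-openstack-cell1-9gfxk above.
created  = datetime(2025, 10, 14, 9, 30, 37, tzinfo=timezone.utc)
observed = datetime(2025, 10, 14, 9, 30, 40, 658081, tzinfo=timezone.utc)  # 40.65808111 truncated to whole microseconds
pull_start, pull_end = 9786.583711806, 9787.709142273  # monotonic m=+ offsets

e2e = (observed - created).total_seconds()  # ~3.658081 -> podStartE2EDuration "3.65808111s"
slo = e2e - (pull_end - pull_start)         # ~2.532651 -> podStartSLOduration 2.532650643
print(f"e2e={e2e:.6f}s slo={slo:.6f}s")

The same identity holds for the install-certs-openstack-openstack-cell2-nkr9n record earlier (2.647989588 minus the 0.537377627s pull window gives its reported podStartSLOduration of 2.110611961).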
Oct 14 09:30:49 crc kubenswrapper[5058]: I1014 09:30:49.052017 5058 scope.go:117] "RemoveContainer" containerID="90695985ee0c5bc379f7d9ec3b769db298b00ecf5c6250dff0669d07031968ba" Oct 14 09:30:59 crc kubenswrapper[5058]: I1014 09:30:59.883902 5058 generic.go:334] "Generic (PLEG): container finished" podID="103b3718-026c-4c76-b31a-9514db89eb0d" containerID="28e503abe2f0a2d07699e61544e81ddd7bc3e0dd4a258ad3ad0ed9b9de286cf8" exitCode=0 Oct 14 09:30:59 crc kubenswrapper[5058]: I1014 09:30:59.883977 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" event={"ID":"103b3718-026c-4c76-b31a-9514db89eb0d","Type":"ContainerDied","Data":"28e503abe2f0a2d07699e61544e81ddd7bc3e0dd4a258ad3ad0ed9b9de286cf8"} Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.491629 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.619693 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.619910 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.619994 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620091 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620156 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620233 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620290 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbj7d\" (UniqueName: \"kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620326 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620416 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620470 
5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.620611 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key\") pod \"103b3718-026c-4c76-b31a-9514db89eb0d\" (UID: \"103b3718-026c-4c76-b31a-9514db89eb0d\") " Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.626527 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.626573 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.626675 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d" (OuterVolumeSpecName: "kube-api-access-sbj7d") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "kube-api-access-sbj7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.627353 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.627499 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.628928 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.631000 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.631030 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.631061 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.651148 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.685071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory" (OuterVolumeSpecName: "inventory") pod "103b3718-026c-4c76-b31a-9514db89eb0d" (UID: "103b3718-026c-4c76-b31a-9514db89eb0d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723861 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723915 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723938 5058 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723962 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723980 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.723998 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.724016 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.724036 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbj7d\" (UniqueName: \"kubernetes.io/projected/103b3718-026c-4c76-b31a-9514db89eb0d-kube-api-access-sbj7d\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.724056 5058 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.724075 5058 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.724094 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103b3718-026c-4c76-b31a-9514db89eb0d-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.912403 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" event={"ID":"103b3718-026c-4c76-b31a-9514db89eb0d","Type":"ContainerDied","Data":"0c808930255ec33b1235a3628246abfb9c6a466c4ad3ffbeb0bc3799f2c51ed0"} Oct 14 09:31:01 crc 
kubenswrapper[5058]: I1014 09:31:01.912465 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c808930255ec33b1235a3628246abfb9c6a466c4ad3ffbeb0bc3799f2c51ed0" Oct 14 09:31:01 crc kubenswrapper[5058]: I1014 09:31:01.912485 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell2-nkr9n" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.076502 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell2-lsmm7"] Oct 14 09:31:02 crc kubenswrapper[5058]: E1014 09:31:02.077172 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103b3718-026c-4c76-b31a-9514db89eb0d" containerName="install-certs-openstack-openstack-cell2" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.077204 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="103b3718-026c-4c76-b31a-9514db89eb0d" containerName="install-certs-openstack-openstack-cell2" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.077526 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="103b3718-026c-4c76-b31a-9514db89eb0d" containerName="install-certs-openstack-openstack-cell2" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.078780 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.082024 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.082346 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.089257 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell2-lsmm7"] Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.235687 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5d4\" (UniqueName: \"kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.236193 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.236291 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.236452 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: 
\"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.236523 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.338898 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.338968 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.339107 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5d4\" (UniqueName: \"kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.339188 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.339222 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.340279 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.345742 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.346509 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.348163 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.361134 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5d4\" (UniqueName: \"kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4\") pod \"ovn-openstack-openstack-cell2-lsmm7\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:02 crc kubenswrapper[5058]: I1014 09:31:02.396552 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:31:03 crc kubenswrapper[5058]: I1014 09:31:03.009031 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell2-lsmm7"] Oct 14 09:31:03 crc kubenswrapper[5058]: I1014 09:31:03.966314 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" event={"ID":"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8","Type":"ContainerStarted","Data":"2b9cd905962e85d049a87fd5bc171aa308918cd145d8dd3071e9cf61e6155545"} Oct 14 09:31:03 crc kubenswrapper[5058]: I1014 09:31:03.966673 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" event={"ID":"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8","Type":"ContainerStarted","Data":"408787e4f01eed074f32360a48bcec000a3cb0a00d620c80f3d55738e9273b76"} Oct 14 09:31:03 crc kubenswrapper[5058]: I1014 09:31:03.996072 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" podStartSLOduration=1.486259102 podStartE2EDuration="1.996051732s" podCreationTimestamp="2025-10-14 09:31:02 +0000 UTC" firstStartedPulling="2025-10-14 09:31:03.012250621 +0000 UTC m=+9810.923334427" lastFinishedPulling="2025-10-14 09:31:03.522043221 +0000 UTC m=+9811.433127057" observedRunningTime="2025-10-14 09:31:03.987290902 +0000 UTC m=+9811.898374748" watchObservedRunningTime="2025-10-14 09:31:03.996051732 +0000 UTC m=+9811.907135548" Oct 14 09:31:52 crc kubenswrapper[5058]: I1014 09:31:52.556531 5058 generic.go:334] "Generic (PLEG): container finished" podID="41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" containerID="9ebc0bcc20da7ae43aa71934a62c73879d54f19dca03840a6f7b02259228c91a" exitCode=0 Oct 14 09:31:52 crc kubenswrapper[5058]: I1014 09:31:52.556761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" event={"ID":"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c","Type":"ContainerDied","Data":"9ebc0bcc20da7ae43aa71934a62c73879d54f19dca03840a6f7b02259228c91a"} Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.124033 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 09:31:52 crc kubenswrapper[5058]: I1014 09:31:52.556531 5058 generic.go:334] "Generic (PLEG): container finished" podID="41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" containerID="9ebc0bcc20da7ae43aa71934a62c73879d54f19dca03840a6f7b02259228c91a" exitCode=0 Oct 14 09:31:52 crc kubenswrapper[5058]: I1014 09:31:52.556761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" event={"ID":"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c","Type":"ContainerDied","Data":"9ebc0bcc20da7ae43aa71934a62c73879d54f19dca03840a6f7b02259228c91a"} Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.124033 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.208604 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx58c\" (UniqueName: \"kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c\") pod \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.208679 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0\") pod \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.208734 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory\") pod \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.208763 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle\") pod \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.208809 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key\") pod \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\" (UID: \"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c\") " Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.218174 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c" (OuterVolumeSpecName: "kube-api-access-zx58c") pod "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" (UID: "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c"). InnerVolumeSpecName "kube-api-access-zx58c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.218208 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" (UID: "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.260853 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory" (OuterVolumeSpecName: "inventory") pod "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" (UID: "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.261610 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" (UID: "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.262021 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" (UID: "41b07bed-d3c4-40f7-bbb8-4d487bea3c5c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.310790 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.310855 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx58c\" (UniqueName: \"kubernetes.io/projected/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-kube-api-access-zx58c\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.310877 5058 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.310896 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.310914 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b07bed-d3c4-40f7-bbb8-4d487bea3c5c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.602621 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" event={"ID":"41b07bed-d3c4-40f7-bbb8-4d487bea3c5c","Type":"ContainerDied","Data":"fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409"} Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.602680 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedc9e94bda9877a081606db82554a0c1b9ab058675b336092c6e6554d9b1409" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.603213 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9gfxk" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.741240 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-tzszb"] Oct 14 09:31:54 crc kubenswrapper[5058]: E1014 09:31:54.742492 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" containerName="ovn-openstack-openstack-cell1" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.742819 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" containerName="ovn-openstack-openstack-cell1" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.743612 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b07bed-d3c4-40f7-bbb8-4d487bea3c5c" containerName="ovn-openstack-openstack-cell1" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.745734 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.750468 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.750536 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.750492 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.750678 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-tzszb"] Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.751209 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.827929 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.828648 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.828851 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt292\" (UniqueName: \"kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.829155 5058 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.829216 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.829337 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931173 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931375 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931445 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt292\" (UniqueName: \"kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931520 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931563 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 
09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.931657 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.935956 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.936288 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.939699 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.940187 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.941085 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:54 crc kubenswrapper[5058]: I1014 09:31:54.961578 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt292\" (UniqueName: \"kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292\") pod \"neutron-metadata-openstack-openstack-cell1-tzszb\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:55 crc kubenswrapper[5058]: I1014 09:31:55.123403 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:31:55 crc kubenswrapper[5058]: I1014 09:31:55.742582 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:31:55 crc kubenswrapper[5058]: I1014 09:31:55.745633 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-tzszb"] Oct 14 09:31:56 crc kubenswrapper[5058]: I1014 09:31:56.628313 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" event={"ID":"9c1079ae-4815-4773-a5e3-84e532ed66db","Type":"ContainerStarted","Data":"604601dd1a762ea0c1e32a4f92c90078c6d083213b2def9b50d2a148296ddbfe"} Oct 14 09:31:57 crc kubenswrapper[5058]: I1014 09:31:57.649766 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" event={"ID":"9c1079ae-4815-4773-a5e3-84e532ed66db","Type":"ContainerStarted","Data":"52d8093784210e6ed01add8dda0656b1151192f42fc9c65ca1e4758a7d1ada5e"} Oct 14 09:31:57 crc kubenswrapper[5058]: I1014 09:31:57.696875 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" podStartSLOduration=2.945964358 podStartE2EDuration="3.696848639s" podCreationTimestamp="2025-10-14 09:31:54 +0000 UTC" firstStartedPulling="2025-10-14 09:31:55.742369613 +0000 UTC m=+9863.653453419" lastFinishedPulling="2025-10-14 09:31:56.493253884 +0000 UTC m=+9864.404337700" observedRunningTime="2025-10-14 09:31:57.673574436 +0000 UTC m=+9865.584658262" watchObservedRunningTime="2025-10-14 09:31:57.696848639 +0000 UTC m=+9865.607932455" Oct 14 09:32:19 crc kubenswrapper[5058]: I1014 09:32:19.913169 5058 generic.go:334] "Generic (PLEG): container finished" podID="b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" containerID="2b9cd905962e85d049a87fd5bc171aa308918cd145d8dd3071e9cf61e6155545" exitCode=0 Oct 14 09:32:19 crc kubenswrapper[5058]: I1014 09:32:19.913260 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" event={"ID":"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8","Type":"ContainerDied","Data":"2b9cd905962e85d049a87fd5bc171aa308918cd145d8dd3071e9cf61e6155545"} Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.478855 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.506098 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b5d4\" (UniqueName: \"kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4\") pod \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.506157 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key\") pod \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.506278 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory\") pod \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.506308 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0\") pod \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.506376 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle\") pod \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\" (UID: \"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8\") " Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.518968 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" (UID: "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.522583 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4" (OuterVolumeSpecName: "kube-api-access-6b5d4") pod "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" (UID: "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8"). InnerVolumeSpecName "kube-api-access-6b5d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.548053 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory" (OuterVolumeSpecName: "inventory") pod "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" (UID: "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.566689 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" (UID: "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.573064 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" (UID: "b6c2dca5-aca4-43a8-afd2-005e5c71f8e8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.609566 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.609604 5058 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.609620 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.609632 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b5d4\" (UniqueName: \"kubernetes.io/projected/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-kube-api-access-6b5d4\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.609643 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6c2dca5-aca4-43a8-afd2-005e5c71f8e8-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.942414 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" event={"ID":"b6c2dca5-aca4-43a8-afd2-005e5c71f8e8","Type":"ContainerDied","Data":"408787e4f01eed074f32360a48bcec000a3cb0a00d620c80f3d55738e9273b76"} Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.942497 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="408787e4f01eed074f32360a48bcec000a3cb0a00d620c80f3d55738e9273b76" Oct 14 09:32:21 crc kubenswrapper[5058]: I1014 09:32:21.942498 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell2-lsmm7" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.080117 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell2-fwvnf"] Oct 14 09:32:22 crc kubenswrapper[5058]: E1014 09:32:22.080628 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" containerName="ovn-openstack-openstack-cell2" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.080646 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" containerName="ovn-openstack-openstack-cell2" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.080878 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c2dca5-aca4-43a8-afd2-005e5c71f8e8" containerName="ovn-openstack-openstack-cell2" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.081624 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.086111 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.087874 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.093512 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell2-fwvnf"] Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.116609 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.116993 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.117052 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v64f\" (UniqueName: \"kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.117116 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 
09:32:22.117140 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.117189 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218394 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v64f\" (UniqueName: \"kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218482 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218502 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218530 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218615 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.218647 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.223650 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.223759 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.224898 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.228072 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.229085 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.234414 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v64f\" (UniqueName: \"kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f\") pod \"neutron-metadata-openstack-openstack-cell2-fwvnf\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:22 crc kubenswrapper[5058]: I1014 09:32:22.411289 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:32:23 crc kubenswrapper[5058]: I1014 09:32:23.117640 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell2-fwvnf"] Oct 14 09:32:23 crc kubenswrapper[5058]: I1014 09:32:23.968344 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" event={"ID":"f168337a-a9d3-4bb4-b7ef-306370917b3f","Type":"ContainerStarted","Data":"d06931067cb3fa83ce4dc93e235bcf640e4b73f2864e6801d7eaaf362474d7e5"} Oct 14 09:32:24 crc kubenswrapper[5058]: I1014 09:32:24.988179 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" event={"ID":"f168337a-a9d3-4bb4-b7ef-306370917b3f","Type":"ContainerStarted","Data":"c62a3391f2db78cc0f9c28552dce83773a11649866ea2962bf15fee5bea9eaa6"} Oct 14 09:32:25 crc kubenswrapper[5058]: I1014 09:32:25.027731 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" podStartSLOduration=2.432299972 podStartE2EDuration="3.027706826s" podCreationTimestamp="2025-10-14 09:32:22 +0000 UTC" firstStartedPulling="2025-10-14 09:32:23.121695958 +0000 UTC m=+9891.032779804" lastFinishedPulling="2025-10-14 09:32:23.717102822 +0000 UTC m=+9891.628186658" observedRunningTime="2025-10-14 09:32:25.019623166 +0000 UTC m=+9892.930707042" watchObservedRunningTime="2025-10-14 09:32:25.027706826 +0000 UTC m=+9892.938790642" Oct 14 09:32:33 crc kubenswrapper[5058]: I1014 09:32:33.656339 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:32:33 crc kubenswrapper[5058]: I1014 09:32:33.656801 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:32:53 crc kubenswrapper[5058]: I1014 09:32:53.341496 5058 generic.go:334] "Generic (PLEG): container finished" podID="9c1079ae-4815-4773-a5e3-84e532ed66db" containerID="52d8093784210e6ed01add8dda0656b1151192f42fc9c65ca1e4758a7d1ada5e" exitCode=0 Oct 14 09:32:53 crc kubenswrapper[5058]: I1014 09:32:53.341589 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" event={"ID":"9c1079ae-4815-4773-a5e3-84e532ed66db","Type":"ContainerDied","Data":"52d8093784210e6ed01add8dda0656b1151192f42fc9c65ca1e4758a7d1ada5e"} Oct 14 09:32:54 crc kubenswrapper[5058]: I1014 09:32:54.922461 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074293 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074376 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074441 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074514 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt292\" (UniqueName: \"kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074584 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.074711 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key\") pod \"9c1079ae-4815-4773-a5e3-84e532ed66db\" (UID: \"9c1079ae-4815-4773-a5e3-84e532ed66db\") " Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.080426 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292" (OuterVolumeSpecName: "kube-api-access-nt292") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "kube-api-access-nt292". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.080517 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.108925 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory" (OuterVolumeSpecName: "inventory") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.113124 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.114380 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.126654 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9c1079ae-4815-4773-a5e3-84e532ed66db" (UID: "9c1079ae-4815-4773-a5e3-84e532ed66db"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177190 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177224 5058 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177236 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177246 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt292\" (UniqueName: \"kubernetes.io/projected/9c1079ae-4815-4773-a5e3-84e532ed66db-kube-api-access-nt292\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177257 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.177267 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c1079ae-4815-4773-a5e3-84e532ed66db-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.369208 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" event={"ID":"9c1079ae-4815-4773-a5e3-84e532ed66db","Type":"ContainerDied","Data":"604601dd1a762ea0c1e32a4f92c90078c6d083213b2def9b50d2a148296ddbfe"} Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.369278 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-tzszb" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.369304 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604601dd1a762ea0c1e32a4f92c90078c6d083213b2def9b50d2a148296ddbfe" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.619220 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7wfsc"] Oct 14 09:32:55 crc kubenswrapper[5058]: E1014 09:32:55.619987 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c1079ae-4815-4773-a5e3-84e532ed66db" containerName="neutron-metadata-openstack-openstack-cell1" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.620012 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c1079ae-4815-4773-a5e3-84e532ed66db" containerName="neutron-metadata-openstack-openstack-cell1" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.620221 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c1079ae-4815-4773-a5e3-84e532ed66db" containerName="neutron-metadata-openstack-openstack-cell1" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.621021 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.622957 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.623183 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.623908 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.633607 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7wfsc"] Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.691437 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.691497 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.691525 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.691608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrz5\" (UniqueName: \"kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.691632 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.792992 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltrz5\" (UniqueName: \"kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.793045 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.793142 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.793162 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.793184 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.797363 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.797588 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.797676 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.802698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.811505 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltrz5\" (UniqueName: \"kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5\") pod \"libvirt-openstack-openstack-cell1-7wfsc\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") " pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:55 crc kubenswrapper[5058]: I1014 09:32:55.946819 5058 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" Oct 14 09:32:56 crc kubenswrapper[5058]: I1014 09:32:56.552274 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-7wfsc"] Oct 14 09:32:57 crc kubenswrapper[5058]: I1014 09:32:57.397061 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" event={"ID":"7e70a9ec-db2c-4945-823a-1392efdabd0e","Type":"ContainerStarted","Data":"e53dd16de82510ca19f83ea723cee7a6909cd283f9458aed2346ed7690b08b0f"} Oct 14 09:32:58 crc kubenswrapper[5058]: I1014 09:32:58.412817 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" event={"ID":"7e70a9ec-db2c-4945-823a-1392efdabd0e","Type":"ContainerStarted","Data":"bbd449dee326e7d1a19b810549c5cb635f3c35981d46991f72c2622952be94fd"} Oct 14 09:32:58 crc kubenswrapper[5058]: I1014 09:32:58.446739 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" podStartSLOduration=2.622553286 podStartE2EDuration="3.446721466s" podCreationTimestamp="2025-10-14 09:32:55 +0000 UTC" firstStartedPulling="2025-10-14 09:32:56.572065429 +0000 UTC m=+9924.483149245" lastFinishedPulling="2025-10-14 09:32:57.396233589 +0000 UTC m=+9925.307317425" observedRunningTime="2025-10-14 09:32:58.43489788 +0000 UTC m=+9926.345981746" watchObservedRunningTime="2025-10-14 09:32:58.446721466 +0000 UTC m=+9926.357805282" Oct 14 09:33:03 crc kubenswrapper[5058]: I1014 09:33:03.656633 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:33:03 crc kubenswrapper[5058]: I1014 09:33:03.657763 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:33:23 crc kubenswrapper[5058]: I1014 09:33:23.780256 5058 generic.go:334] "Generic (PLEG): container finished" podID="f168337a-a9d3-4bb4-b7ef-306370917b3f" containerID="c62a3391f2db78cc0f9c28552dce83773a11649866ea2962bf15fee5bea9eaa6" exitCode=0 Oct 14 09:33:23 crc kubenswrapper[5058]: I1014 09:33:23.780340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" event={"ID":"f168337a-a9d3-4bb4-b7ef-306370917b3f","Type":"ContainerDied","Data":"c62a3391f2db78cc0f9c28552dce83773a11649866ea2962bf15fee5bea9eaa6"} Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.320468 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.416746 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.416871 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.416938 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.417019 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.417083 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.417221 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v64f\" (UniqueName: \"kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f\") pod \"f168337a-a9d3-4bb4-b7ef-306370917b3f\" (UID: \"f168337a-a9d3-4bb4-b7ef-306370917b3f\") " Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.422296 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.426216 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f" (OuterVolumeSpecName: "kube-api-access-8v64f") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "kube-api-access-8v64f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.453977 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.456110 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory" (OuterVolumeSpecName: "inventory") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.458253 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.472679 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f168337a-a9d3-4bb4-b7ef-306370917b3f" (UID: "f168337a-a9d3-4bb4-b7ef-306370917b3f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.519925 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v64f\" (UniqueName: \"kubernetes.io/projected/f168337a-a9d3-4bb4-b7ef-306370917b3f-kube-api-access-8v64f\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.519967 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.519980 5058 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.519993 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.520009 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.520024 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f168337a-a9d3-4bb4-b7ef-306370917b3f-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.807593 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" event={"ID":"f168337a-a9d3-4bb4-b7ef-306370917b3f","Type":"ContainerDied","Data":"d06931067cb3fa83ce4dc93e235bcf640e4b73f2864e6801d7eaaf362474d7e5"} Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.807637 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06931067cb3fa83ce4dc93e235bcf640e4b73f2864e6801d7eaaf362474d7e5" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.807715 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell2-fwvnf" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.903837 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell2-nblb9"] Oct 14 09:33:25 crc kubenswrapper[5058]: E1014 09:33:25.904238 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f168337a-a9d3-4bb4-b7ef-306370917b3f" containerName="neutron-metadata-openstack-openstack-cell2" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.904255 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="f168337a-a9d3-4bb4-b7ef-306370917b3f" containerName="neutron-metadata-openstack-openstack-cell2" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.904462 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="f168337a-a9d3-4bb4-b7ef-306370917b3f" containerName="neutron-metadata-openstack-openstack-cell2" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.905185 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.909812 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.910254 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:33:25 crc kubenswrapper[5058]: I1014 09:33:25.923755 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell2-nblb9"] Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.030403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.030494 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.030521 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.030607 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqrt\" (UniqueName: \"kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.030642 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.132365 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.132939 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fqrt\" (UniqueName: \"kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " 
pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.133004 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.133124 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.133261 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.137758 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.138578 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.138767 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.139868 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.159406 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fqrt\" (UniqueName: \"kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt\") pod \"libvirt-openstack-openstack-cell2-nblb9\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.221060 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:33:26 crc kubenswrapper[5058]: W1014 09:33:26.895665 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb82ae322_0931_45e2_9b71_f6be598d8245.slice/crio-2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485 WatchSource:0}: Error finding container 2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485: Status 404 returned error can't find the container with id 2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485 Oct 14 09:33:26 crc kubenswrapper[5058]: I1014 09:33:26.897070 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell2-nblb9"] Oct 14 09:33:27 crc kubenswrapper[5058]: I1014 09:33:27.832671 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" event={"ID":"b82ae322-0931-45e2-9b71-f6be598d8245","Type":"ContainerStarted","Data":"2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485"} Oct 14 09:33:28 crc kubenswrapper[5058]: I1014 09:33:28.849183 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" event={"ID":"b82ae322-0931-45e2-9b71-f6be598d8245","Type":"ContainerStarted","Data":"e0a6cddee3818aae000bd80c95a928f26c894639636860cd6b5337fef0d22b8a"} Oct 14 09:33:28 crc kubenswrapper[5058]: I1014 09:33:28.881074 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" podStartSLOduration=3.200093005 podStartE2EDuration="3.881049012s" podCreationTimestamp="2025-10-14 09:33:25 +0000 UTC" firstStartedPulling="2025-10-14 09:33:26.902773977 +0000 UTC m=+9954.813857793" lastFinishedPulling="2025-10-14 09:33:27.583729964 +0000 UTC m=+9955.494813800" observedRunningTime="2025-10-14 09:33:28.874484905 +0000 UTC m=+9956.785568711" watchObservedRunningTime="2025-10-14 09:33:28.881049012 +0000 UTC m=+9956.792132848" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.656074 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.656849 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.656919 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.658201 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.658302 5058 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" gracePeriod=600 Oct 14 09:33:33 crc kubenswrapper[5058]: E1014 09:33:33.792980 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.962338 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" exitCode=0 Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.962380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47"} Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.962413 5058 scope.go:117] "RemoveContainer" containerID="5c8b0488d0741775b8aef21ce61846f31d6ae906b86145654734d11d1b8a2ec4" Oct 14 09:33:33 crc kubenswrapper[5058]: I1014 09:33:33.963267 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:33:33 crc kubenswrapper[5058]: E1014 09:33:33.963730 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:33:46 crc kubenswrapper[5058]: I1014 09:33:46.790854 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:33:46 crc kubenswrapper[5058]: E1014 09:33:46.791870 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:33:58 crc kubenswrapper[5058]: I1014 09:33:58.790073 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:33:58 crc kubenswrapper[5058]: E1014 09:33:58.791229 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.070545 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.086124 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.112509 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.191632 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.191766 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmtn\" (UniqueName: \"kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.191813 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.293620 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.293785 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmtn\" (UniqueName: \"kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.293858 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.294284 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.294467 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.312950 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmtn\" (UniqueName: \"kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn\") pod \"redhat-operators-t7cnx\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.415880 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:09 crc kubenswrapper[5058]: I1014 09:34:09.927698 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:10 crc kubenswrapper[5058]: I1014 09:34:10.426092 5058 generic.go:334] "Generic (PLEG): container finished" podID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerID="c7c8955c1adf8dfd4ca693d83cf325eb67020efa48ce08c8a3d6d2c97d0f4469" exitCode=0 Oct 14 09:34:10 crc kubenswrapper[5058]: I1014 09:34:10.426209 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerDied","Data":"c7c8955c1adf8dfd4ca693d83cf325eb67020efa48ce08c8a3d6d2c97d0f4469"} Oct 14 09:34:10 crc kubenswrapper[5058]: I1014 09:34:10.426449 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerStarted","Data":"572f2133ed4a34af39b3d6579de4f6066a9dd39f15972dffb36fb8caf8ecd5a5"} Oct 14 09:34:11 crc kubenswrapper[5058]: I1014 09:34:11.443431 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerStarted","Data":"f32f99835f0534d4ad75bc4c640ec5b7b650dc3600555e84260ff5a04d8b65a0"} Oct 14 09:34:11 crc kubenswrapper[5058]: I1014 09:34:11.790654 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:34:11 crc kubenswrapper[5058]: E1014 09:34:11.791166 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:34:14 crc kubenswrapper[5058]: I1014 09:34:14.490452 5058 generic.go:334] "Generic (PLEG): container finished" podID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerID="f32f99835f0534d4ad75bc4c640ec5b7b650dc3600555e84260ff5a04d8b65a0" exitCode=0 Oct 14 09:34:14 crc kubenswrapper[5058]: I1014 09:34:14.490575 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerDied","Data":"f32f99835f0534d4ad75bc4c640ec5b7b650dc3600555e84260ff5a04d8b65a0"} Oct 14 09:34:16 crc kubenswrapper[5058]: I1014 09:34:16.527143 5058 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerStarted","Data":"141a3cab70cd27945051916b9861577204e48e7d05b0c135df3346e9731b6d0c"} Oct 14 09:34:16 crc kubenswrapper[5058]: I1014 09:34:16.570769 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t7cnx" podStartSLOduration=3.075424158 podStartE2EDuration="7.570747969s" podCreationTimestamp="2025-10-14 09:34:09 +0000 UTC" firstStartedPulling="2025-10-14 09:34:10.428071407 +0000 UTC m=+9998.339155223" lastFinishedPulling="2025-10-14 09:34:14.923395198 +0000 UTC m=+10002.834479034" observedRunningTime="2025-10-14 09:34:16.557505963 +0000 UTC m=+10004.468589779" watchObservedRunningTime="2025-10-14 09:34:16.570747969 +0000 UTC m=+10004.481831785" Oct 14 09:34:19 crc kubenswrapper[5058]: I1014 09:34:19.416826 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:19 crc kubenswrapper[5058]: I1014 09:34:19.417473 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:20 crc kubenswrapper[5058]: I1014 09:34:20.508734 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t7cnx" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="registry-server" probeResult="failure" output=< Oct 14 09:34:20 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:34:20 crc kubenswrapper[5058]: > Oct 14 09:34:22 crc kubenswrapper[5058]: I1014 09:34:22.804485 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:34:22 crc kubenswrapper[5058]: E1014 09:34:22.806360 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:34:29 crc kubenswrapper[5058]: I1014 09:34:29.768710 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:29 crc kubenswrapper[5058]: I1014 09:34:29.852488 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:30 crc kubenswrapper[5058]: I1014 09:34:30.022398 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:31 crc kubenswrapper[5058]: I1014 09:34:31.715233 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t7cnx" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="registry-server" containerID="cri-o://141a3cab70cd27945051916b9861577204e48e7d05b0c135df3346e9731b6d0c" gracePeriod=2 Oct 14 09:34:32 crc kubenswrapper[5058]: I1014 09:34:32.742441 5058 generic.go:334] "Generic (PLEG): container finished" podID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerID="141a3cab70cd27945051916b9861577204e48e7d05b0c135df3346e9731b6d0c" exitCode=0 Oct 14 09:34:32 crc kubenswrapper[5058]: I1014 
09:34:32.742509 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerDied","Data":"141a3cab70cd27945051916b9861577204e48e7d05b0c135df3346e9731b6d0c"} Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.107430 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.304714 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmtn\" (UniqueName: \"kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn\") pod \"220c88f5-f493-446e-b5f9-cd36abb3f80f\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.304890 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities\") pod \"220c88f5-f493-446e-b5f9-cd36abb3f80f\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.304934 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content\") pod \"220c88f5-f493-446e-b5f9-cd36abb3f80f\" (UID: \"220c88f5-f493-446e-b5f9-cd36abb3f80f\") " Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.305832 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities" (OuterVolumeSpecName: "utilities") pod "220c88f5-f493-446e-b5f9-cd36abb3f80f" (UID: "220c88f5-f493-446e-b5f9-cd36abb3f80f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.312969 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn" (OuterVolumeSpecName: "kube-api-access-pxmtn") pod "220c88f5-f493-446e-b5f9-cd36abb3f80f" (UID: "220c88f5-f493-446e-b5f9-cd36abb3f80f"). InnerVolumeSpecName "kube-api-access-pxmtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.383082 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "220c88f5-f493-446e-b5f9-cd36abb3f80f" (UID: "220c88f5-f493-446e-b5f9-cd36abb3f80f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.407913 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmtn\" (UniqueName: \"kubernetes.io/projected/220c88f5-f493-446e-b5f9-cd36abb3f80f-kube-api-access-pxmtn\") on node \"crc\" DevicePath \"\"" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.407971 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.407992 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/220c88f5-f493-446e-b5f9-cd36abb3f80f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.766226 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t7cnx" event={"ID":"220c88f5-f493-446e-b5f9-cd36abb3f80f","Type":"ContainerDied","Data":"572f2133ed4a34af39b3d6579de4f6066a9dd39f15972dffb36fb8caf8ecd5a5"} Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.766292 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t7cnx" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.766323 5058 scope.go:117] "RemoveContainer" containerID="141a3cab70cd27945051916b9861577204e48e7d05b0c135df3346e9731b6d0c" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.820584 5058 scope.go:117] "RemoveContainer" containerID="f32f99835f0534d4ad75bc4c640ec5b7b650dc3600555e84260ff5a04d8b65a0" Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.837363 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.848704 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t7cnx"] Oct 14 09:34:33 crc kubenswrapper[5058]: I1014 09:34:33.884047 5058 scope.go:117] "RemoveContainer" containerID="c7c8955c1adf8dfd4ca693d83cf325eb67020efa48ce08c8a3d6d2c97d0f4469" Oct 14 09:34:34 crc kubenswrapper[5058]: E1014 09:34:34.006219 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod220c88f5_f493_446e_b5f9_cd36abb3f80f.slice/crio-572f2133ed4a34af39b3d6579de4f6066a9dd39f15972dffb36fb8caf8ecd5a5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod220c88f5_f493_446e_b5f9_cd36abb3f80f.slice\": RecentStats: unable to find data in memory cache]" Oct 14 09:34:34 crc kubenswrapper[5058]: I1014 09:34:34.815364 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" path="/var/lib/kubelet/pods/220c88f5-f493-446e-b5f9-cd36abb3f80f/volumes" Oct 14 09:34:37 crc kubenswrapper[5058]: I1014 09:34:37.791253 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:34:37 crc kubenswrapper[5058]: E1014 09:34:37.791904 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:34:51 crc kubenswrapper[5058]: I1014 09:34:51.791197 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:34:51 crc kubenswrapper[5058]: E1014 09:34:51.792693 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.141238 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:34:55 crc kubenswrapper[5058]: E1014 09:34:55.142161 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="extract-content" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.142174 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="extract-content" Oct 14 09:34:55 crc kubenswrapper[5058]: E1014 09:34:55.142188 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="extract-utilities" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.142194 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="extract-utilities" Oct 14 09:34:55 crc kubenswrapper[5058]: E1014 09:34:55.142219 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="registry-server" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.142225 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="registry-server" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.142416 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="220c88f5-f493-446e-b5f9-cd36abb3f80f" containerName="registry-server" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.144062 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.168317 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.234702 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.234773 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.234861 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkk5x\" (UniqueName: \"kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.337359 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.337470 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.337616 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkk5x\" (UniqueName: \"kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.337935 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.338263 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.363247 5058 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pkk5x\" (UniqueName: \"kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x\") pod \"redhat-marketplace-pvsxs\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.481011 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:34:55 crc kubenswrapper[5058]: I1014 09:34:55.989686 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:34:56 crc kubenswrapper[5058]: I1014 09:34:56.073458 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerStarted","Data":"d1ea1ad1a00b27ba889eb7b7fb757017c67ab6d4bead58c59a5605694c895e0b"} Oct 14 09:34:57 crc kubenswrapper[5058]: I1014 09:34:57.087295 5058 generic.go:334] "Generic (PLEG): container finished" podID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerID="712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f" exitCode=0 Oct 14 09:34:57 crc kubenswrapper[5058]: I1014 09:34:57.087377 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerDied","Data":"712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f"} Oct 14 09:34:58 crc kubenswrapper[5058]: I1014 09:34:58.101043 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerStarted","Data":"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc"} Oct 14 09:34:59 crc kubenswrapper[5058]: I1014 09:34:59.118036 5058 generic.go:334] "Generic (PLEG): container finished" podID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerID="61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc" exitCode=0 Oct 14 09:34:59 crc kubenswrapper[5058]: I1014 09:34:59.118127 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerDied","Data":"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc"} Oct 14 09:35:00 crc kubenswrapper[5058]: I1014 09:35:00.129944 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerStarted","Data":"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a"} Oct 14 09:35:00 crc kubenswrapper[5058]: I1014 09:35:00.158403 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvsxs" podStartSLOduration=2.667274532 podStartE2EDuration="5.158387921s" podCreationTimestamp="2025-10-14 09:34:55 +0000 UTC" firstStartedPulling="2025-10-14 09:34:57.090586951 +0000 UTC m=+10045.001670797" lastFinishedPulling="2025-10-14 09:34:59.58170037 +0000 UTC m=+10047.492784186" observedRunningTime="2025-10-14 09:35:00.149609121 +0000 UTC m=+10048.060692927" watchObservedRunningTime="2025-10-14 09:35:00.158387921 +0000 UTC m=+10048.069471727" Oct 14 09:35:04 crc kubenswrapper[5058]: I1014 09:35:04.790680 5058 scope.go:117] "RemoveContainer" 
containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:35:04 crc kubenswrapper[5058]: E1014 09:35:04.792018 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:35:05 crc kubenswrapper[5058]: I1014 09:35:05.481446 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:05 crc kubenswrapper[5058]: I1014 09:35:05.482358 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:05 crc kubenswrapper[5058]: I1014 09:35:05.575864 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:06 crc kubenswrapper[5058]: I1014 09:35:06.246980 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:06 crc kubenswrapper[5058]: I1014 09:35:06.810773 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.229959 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvsxs" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="registry-server" containerID="cri-o://242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a" gracePeriod=2 Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.752000 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.862502 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities\") pod \"3d46c89f-af0b-4617-8aaa-722f6791729f\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.862723 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content\") pod \"3d46c89f-af0b-4617-8aaa-722f6791729f\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.862964 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkk5x\" (UniqueName: \"kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x\") pod \"3d46c89f-af0b-4617-8aaa-722f6791729f\" (UID: \"3d46c89f-af0b-4617-8aaa-722f6791729f\") " Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.863462 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities" (OuterVolumeSpecName: "utilities") pod "3d46c89f-af0b-4617-8aaa-722f6791729f" (UID: "3d46c89f-af0b-4617-8aaa-722f6791729f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.864133 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.868587 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x" (OuterVolumeSpecName: "kube-api-access-pkk5x") pod "3d46c89f-af0b-4617-8aaa-722f6791729f" (UID: "3d46c89f-af0b-4617-8aaa-722f6791729f"). InnerVolumeSpecName "kube-api-access-pkk5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.888347 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d46c89f-af0b-4617-8aaa-722f6791729f" (UID: "3d46c89f-af0b-4617-8aaa-722f6791729f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.966606 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d46c89f-af0b-4617-8aaa-722f6791729f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:35:08 crc kubenswrapper[5058]: I1014 09:35:08.967157 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkk5x\" (UniqueName: \"kubernetes.io/projected/3d46c89f-af0b-4617-8aaa-722f6791729f-kube-api-access-pkk5x\") on node \"crc\" DevicePath \"\"" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.248113 5058 generic.go:334] "Generic (PLEG): container finished" podID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerID="242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a" exitCode=0 Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.248173 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerDied","Data":"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a"} Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.248229 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvsxs" event={"ID":"3d46c89f-af0b-4617-8aaa-722f6791729f","Type":"ContainerDied","Data":"d1ea1ad1a00b27ba889eb7b7fb757017c67ab6d4bead58c59a5605694c895e0b"} Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.248249 5058 scope.go:117] "RemoveContainer" containerID="242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.249653 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvsxs" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.298586 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.299523 5058 scope.go:117] "RemoveContainer" containerID="61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.323991 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvsxs"] Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.337519 5058 scope.go:117] "RemoveContainer" containerID="712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.407383 5058 scope.go:117] "RemoveContainer" containerID="242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a" Oct 14 09:35:09 crc kubenswrapper[5058]: E1014 09:35:09.409305 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a\": container with ID starting with 242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a not found: ID does not exist" containerID="242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.409364 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a"} err="failed to get container status \"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a\": rpc error: code = NotFound desc = could not find container \"242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a\": container with ID starting with 242978f3b5132c44cf6bcb6e9258e273743a8573397d17bc217294abe2094c5a not found: ID does not exist" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.409395 5058 scope.go:117] "RemoveContainer" containerID="61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc" Oct 14 09:35:09 crc kubenswrapper[5058]: E1014 09:35:09.410175 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc\": container with ID starting with 61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc not found: ID does not exist" containerID="61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.410217 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc"} err="failed to get container status \"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc\": rpc error: code = NotFound desc = could not find container \"61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc\": container with ID starting with 61e9a590f2bb935e720b433ccd5f604a0343da4404b29ed81a395ea3f13616fc not found: ID does not exist" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.410254 5058 scope.go:117] "RemoveContainer" containerID="712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f" Oct 14 09:35:09 crc kubenswrapper[5058]: E1014 09:35:09.413692 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f\": container with ID starting with 712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f not found: ID does not exist" containerID="712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f" Oct 14 09:35:09 crc kubenswrapper[5058]: I1014 09:35:09.413752 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f"} err="failed to get container status \"712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f\": rpc error: code = NotFound desc = could not find container \"712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f\": container with ID starting with 712bf5ba7622601b9fc05dd2aa3d7a94dc058b21a59da99e55da0569055cda4f not found: ID does not exist" Oct 14 09:35:10 crc kubenswrapper[5058]: I1014 09:35:10.812043 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" path="/var/lib/kubelet/pods/3d46c89f-af0b-4617-8aaa-722f6791729f/volumes" Oct 14 09:35:18 crc kubenswrapper[5058]: I1014 09:35:18.790966 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:35:18 crc kubenswrapper[5058]: E1014 09:35:18.791516 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:35:33 crc kubenswrapper[5058]: I1014 09:35:33.789917 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:35:33 crc kubenswrapper[5058]: E1014 09:35:33.790955 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:35:44 crc kubenswrapper[5058]: I1014 09:35:44.794435 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:35:44 crc kubenswrapper[5058]: E1014 09:35:44.795557 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:35:55 crc kubenswrapper[5058]: I1014 09:35:55.790677 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:35:55 crc kubenswrapper[5058]: E1014 09:35:55.791526 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:36:10 crc kubenswrapper[5058]: I1014 09:36:10.790003 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:36:10 crc kubenswrapper[5058]: E1014 09:36:10.791073 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:36:23 crc kubenswrapper[5058]: I1014 09:36:23.791382 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:36:23 crc kubenswrapper[5058]: E1014 09:36:23.792486 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:36:36 crc kubenswrapper[5058]: I1014 09:36:36.792176 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:36:36 crc kubenswrapper[5058]: E1014 09:36:36.793197 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:36:50 crc kubenswrapper[5058]: I1014 09:36:50.790977 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:36:50 crc kubenswrapper[5058]: E1014 09:36:50.791917 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:37:04 crc kubenswrapper[5058]: I1014 09:37:04.790871 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:37:04 crc kubenswrapper[5058]: E1014 09:37:04.792503 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:37:16 crc kubenswrapper[5058]: I1014 09:37:16.790266 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47"
Oct 14 09:37:16 crc kubenswrapper[5058]: E1014 09:37:16.791466 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:37:31 crc kubenswrapper[5058]: I1014 09:37:31.792078 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47"
Oct 14 09:37:31 crc kubenswrapper[5058]: E1014 09:37:31.793732 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:37:36 crc kubenswrapper[5058]: I1014 09:37:36.288393 5058 generic.go:334] "Generic (PLEG): container finished" podID="7e70a9ec-db2c-4945-823a-1392efdabd0e" containerID="bbd449dee326e7d1a19b810549c5cb635f3c35981d46991f72c2622952be94fd" exitCode=0
Oct 14 09:37:36 crc kubenswrapper[5058]: I1014 09:37:36.289142 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" event={"ID":"7e70a9ec-db2c-4945-823a-1392efdabd0e","Type":"ContainerDied","Data":"bbd449dee326e7d1a19b810549c5cb635f3c35981d46991f72c2622952be94fd"}
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.856032 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc"
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.974695 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltrz5\" (UniqueName: \"kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5\") pod \"7e70a9ec-db2c-4945-823a-1392efdabd0e\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") "
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.974754 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key\") pod \"7e70a9ec-db2c-4945-823a-1392efdabd0e\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") "
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.975002 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0\") pod \"7e70a9ec-db2c-4945-823a-1392efdabd0e\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") "
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.975059 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory\") pod \"7e70a9ec-db2c-4945-823a-1392efdabd0e\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") "
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.975164 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle\") pod \"7e70a9ec-db2c-4945-823a-1392efdabd0e\" (UID: \"7e70a9ec-db2c-4945-823a-1392efdabd0e\") "
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.982606 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7e70a9ec-db2c-4945-823a-1392efdabd0e" (UID: "7e70a9ec-db2c-4945-823a-1392efdabd0e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:37:37 crc kubenswrapper[5058]: I1014 09:37:37.989120 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5" (OuterVolumeSpecName: "kube-api-access-ltrz5") pod "7e70a9ec-db2c-4945-823a-1392efdabd0e" (UID: "7e70a9ec-db2c-4945-823a-1392efdabd0e"). InnerVolumeSpecName "kube-api-access-ltrz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.016573 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7e70a9ec-db2c-4945-823a-1392efdabd0e" (UID: "7e70a9ec-db2c-4945-823a-1392efdabd0e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.023024 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory" (OuterVolumeSpecName: "inventory") pod "7e70a9ec-db2c-4945-823a-1392efdabd0e" (UID: "7e70a9ec-db2c-4945-823a-1392efdabd0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.023786 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e70a9ec-db2c-4945-823a-1392efdabd0e" (UID: "7e70a9ec-db2c-4945-823a-1392efdabd0e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.077300 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.077338 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.077354 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.077367 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltrz5\" (UniqueName: \"kubernetes.io/projected/7e70a9ec-db2c-4945-823a-1392efdabd0e-kube-api-access-ltrz5\") on node \"crc\" DevicePath \"\""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.077379 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e70a9ec-db2c-4945-823a-1392efdabd0e-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.320454 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc" event={"ID":"7e70a9ec-db2c-4945-823a-1392efdabd0e","Type":"ContainerDied","Data":"e53dd16de82510ca19f83ea723cee7a6909cd283f9458aed2346ed7690b08b0f"}
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.320512 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53dd16de82510ca19f83ea723cee7a6909cd283f9458aed2346ed7690b08b0f"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.320538 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-7wfsc"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.439536 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vf28n"]
Oct 14 09:37:38 crc kubenswrapper[5058]: E1014 09:37:38.440828 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="registry-server"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.440867 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="registry-server"
Oct 14 09:37:38 crc kubenswrapper[5058]: E1014 09:37:38.440891 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="extract-content"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.441087 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="extract-content"
Oct 14 09:37:38 crc kubenswrapper[5058]: E1014 09:37:38.441115 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="extract-utilities"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.441128 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="extract-utilities"
Oct 14 09:37:38 crc kubenswrapper[5058]: E1014 09:37:38.441188 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e70a9ec-db2c-4945-823a-1392efdabd0e" containerName="libvirt-openstack-openstack-cell1"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.441204 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e70a9ec-db2c-4945-823a-1392efdabd0e" containerName="libvirt-openstack-openstack-cell1"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.441632 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d46c89f-af0b-4617-8aaa-722f6791729f" containerName="registry-server"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.441670 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e70a9ec-db2c-4945-823a-1392efdabd0e" containerName="libvirt-openstack-openstack-cell1"
Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.443055 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n"
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.445099 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.446256 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.446720 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.446971 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.447855 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.456165 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vf28n"] Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.587181 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.587263 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.587568 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtms\" (UniqueName: \"kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.587903 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.588247 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.588286 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.588403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.588868 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.589036 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.691903 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.691961 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692009 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692094 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692139 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: 
\"kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692190 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692222 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692267 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtms\" (UniqueName: \"kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.692323 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.693788 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.698034 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.698152 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.698565 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1\") pod 
\"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.699119 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.700037 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.700659 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.706698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.712905 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtms\" (UniqueName: \"kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms\") pod \"nova-cell1-openstack-openstack-cell1-vf28n\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:38 crc kubenswrapper[5058]: I1014 09:37:38.791166 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:37:39 crc kubenswrapper[5058]: I1014 09:37:39.428011 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:37:39 crc kubenswrapper[5058]: I1014 09:37:39.435893 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-vf28n"] Oct 14 09:37:40 crc kubenswrapper[5058]: I1014 09:37:40.345688 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" event={"ID":"465814e2-5b61-4d58-bdde-96e74d43768a","Type":"ContainerStarted","Data":"c3972bc693fcd75969b74b8ec01f43aa60b978fdc5e5b1f12c2cf675a1587e6d"} Oct 14 09:37:41 crc kubenswrapper[5058]: I1014 09:37:41.365980 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" event={"ID":"465814e2-5b61-4d58-bdde-96e74d43768a","Type":"ContainerStarted","Data":"4d7d3ad9d2f808ade9a3f84d2adffd8368846ee899cfe024a6839896c4b31c41"} Oct 14 09:37:41 crc kubenswrapper[5058]: I1014 09:37:41.406894 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" podStartSLOduration=2.928744091 podStartE2EDuration="3.406868029s" podCreationTimestamp="2025-10-14 09:37:38 +0000 UTC" firstStartedPulling="2025-10-14 09:37:39.427721211 +0000 UTC m=+10207.338805027" lastFinishedPulling="2025-10-14 09:37:39.905845139 +0000 UTC m=+10207.816928965" observedRunningTime="2025-10-14 09:37:41.394385224 +0000 UTC m=+10209.305469060" watchObservedRunningTime="2025-10-14 09:37:41.406868029 +0000 UTC m=+10209.317951865" Oct 14 09:37:46 crc kubenswrapper[5058]: I1014 09:37:46.791004 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:37:46 crc kubenswrapper[5058]: E1014 09:37:46.792084 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:37:58 crc kubenswrapper[5058]: I1014 09:37:58.790488 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:37:58 crc kubenswrapper[5058]: E1014 09:37:58.791761 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:38:11 crc kubenswrapper[5058]: I1014 09:38:11.790471 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:38:11 crc kubenswrapper[5058]: E1014 09:38:11.791945 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.236285 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8skm"]
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.239654 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.249012 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8skm"]
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.369502 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.369580 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hq9j\" (UniqueName: \"kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.370276 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.472186 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.472273 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.472324 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hq9j\" (UniqueName: \"kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.472855 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.472887 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.502651 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hq9j\" (UniqueName: \"kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j\") pod \"certified-operators-x8skm\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.577465 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8skm"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.634689 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zwsmw"]
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.638011 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.658313 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwsmw"]
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.781286 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.781341 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xdbw\" (UniqueName: \"kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.781771 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.884529 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.884577 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xdbw\" (UniqueName: \"kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.884677 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.885307 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.885410 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:16 crc kubenswrapper[5058]: I1014 09:38:16.916806 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xdbw\" (UniqueName: \"kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw\") pod \"community-operators-zwsmw\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.037359 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwsmw"
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.127853 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8skm"]
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.580402 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zwsmw"]
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.855545 5058 generic.go:334] "Generic (PLEG): container finished" podID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerID="f6df5ef160235a4658a4b9102ac1bb83c33b576dab1de3dfc79cbb193ea9bd90" exitCode=0
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.855733 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerDied","Data":"f6df5ef160235a4658a4b9102ac1bb83c33b576dab1de3dfc79cbb193ea9bd90"}
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.855836 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerStarted","Data":"4931c091d476c9dd2d129fa2a21a7d5594a266c5ec5728518d018ace6662bf02"}
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.858306 5058 generic.go:334] "Generic (PLEG): container finished" podID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerID="cbf00c945118e024bc49db68f1e043f3a4ad689b3ef67d39526af9a3c34a559b" exitCode=0
Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.858369 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerDied","Data":"cbf00c945118e024bc49db68f1e043f3a4ad689b3ef67d39526af9a3c34a559b"}
event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerDied","Data":"cbf00c945118e024bc49db68f1e043f3a4ad689b3ef67d39526af9a3c34a559b"} Oct 14 09:38:17 crc kubenswrapper[5058]: I1014 09:38:17.858401 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerStarted","Data":"c938432d19d6eeacbc66b7e467a6416dbf58d3e98a9264b137e764fb78c2af22"} Oct 14 09:38:18 crc kubenswrapper[5058]: I1014 09:38:18.873987 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerStarted","Data":"0ede5dc941af050726dd1188e022a9ef0f4644d06979ebf75828c08c9f2d5668"} Oct 14 09:38:18 crc kubenswrapper[5058]: I1014 09:38:18.876580 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerStarted","Data":"6bb0a3fd2b2036bb60a3c5daf4ca7f2af4ef09688579bd4c14ad2c39389692e8"} Oct 14 09:38:20 crc kubenswrapper[5058]: E1014 09:38:20.880009 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21b78da_17ce_4230_ae10_c9ffa39a8aef.slice/crio-6bb0a3fd2b2036bb60a3c5daf4ca7f2af4ef09688579bd4c14ad2c39389692e8.scope\": RecentStats: unable to find data in memory cache]" Oct 14 09:38:20 crc kubenswrapper[5058]: I1014 09:38:20.899593 5058 generic.go:334] "Generic (PLEG): container finished" podID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerID="0ede5dc941af050726dd1188e022a9ef0f4644d06979ebf75828c08c9f2d5668" exitCode=0 Oct 14 09:38:20 crc kubenswrapper[5058]: I1014 09:38:20.899630 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerDied","Data":"0ede5dc941af050726dd1188e022a9ef0f4644d06979ebf75828c08c9f2d5668"} Oct 14 09:38:20 crc kubenswrapper[5058]: I1014 09:38:20.914386 5058 generic.go:334] "Generic (PLEG): container finished" podID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerID="6bb0a3fd2b2036bb60a3c5daf4ca7f2af4ef09688579bd4c14ad2c39389692e8" exitCode=0 Oct 14 09:38:20 crc kubenswrapper[5058]: I1014 09:38:20.914423 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerDied","Data":"6bb0a3fd2b2036bb60a3c5daf4ca7f2af4ef09688579bd4c14ad2c39389692e8"} Oct 14 09:38:21 crc kubenswrapper[5058]: I1014 09:38:21.930846 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerStarted","Data":"fcedf4216f5b4589494b5f8dd3e2e8e2503e982a209cbfcae7135392e24c3d98"} Oct 14 09:38:21 crc kubenswrapper[5058]: I1014 09:38:21.933913 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerStarted","Data":"24cc8800f6f08673aae8648868eb7ccfc1af2bf90f467bacef08909af8e57015"} Oct 14 09:38:21 crc kubenswrapper[5058]: I1014 09:38:21.954164 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zwsmw" 
podStartSLOduration=2.39121217 podStartE2EDuration="5.954140282s" podCreationTimestamp="2025-10-14 09:38:16 +0000 UTC" firstStartedPulling="2025-10-14 09:38:17.857928773 +0000 UTC m=+10245.769012579" lastFinishedPulling="2025-10-14 09:38:21.420856875 +0000 UTC m=+10249.331940691" observedRunningTime="2025-10-14 09:38:21.95056534 +0000 UTC m=+10249.861649166" watchObservedRunningTime="2025-10-14 09:38:21.954140282 +0000 UTC m=+10249.865224098" Oct 14 09:38:21 crc kubenswrapper[5058]: I1014 09:38:21.986567 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8skm" podStartSLOduration=2.471574785 podStartE2EDuration="5.986546713s" podCreationTimestamp="2025-10-14 09:38:16 +0000 UTC" firstStartedPulling="2025-10-14 09:38:17.862115462 +0000 UTC m=+10245.773199268" lastFinishedPulling="2025-10-14 09:38:21.37708738 +0000 UTC m=+10249.288171196" observedRunningTime="2025-10-14 09:38:21.97129266 +0000 UTC m=+10249.882376496" watchObservedRunningTime="2025-10-14 09:38:21.986546713 +0000 UTC m=+10249.897630519" Oct 14 09:38:22 crc kubenswrapper[5058]: I1014 09:38:22.799205 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:38:22 crc kubenswrapper[5058]: E1014 09:38:22.800397 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:38:26 crc kubenswrapper[5058]: I1014 09:38:26.578364 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:26 crc kubenswrapper[5058]: I1014 09:38:26.578752 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:26 crc kubenswrapper[5058]: I1014 09:38:26.630701 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:27 crc kubenswrapper[5058]: I1014 09:38:27.038277 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:27 crc kubenswrapper[5058]: I1014 09:38:27.038591 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:27 crc kubenswrapper[5058]: I1014 09:38:27.095913 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:27 crc kubenswrapper[5058]: I1014 09:38:27.106859 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:28 crc kubenswrapper[5058]: I1014 09:38:28.040551 5058 generic.go:334] "Generic (PLEG): container finished" podID="b82ae322-0931-45e2-9b71-f6be598d8245" containerID="e0a6cddee3818aae000bd80c95a928f26c894639636860cd6b5337fef0d22b8a" exitCode=0 Oct 14 09:38:28 crc kubenswrapper[5058]: I1014 09:38:28.040616 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" 
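Annotation: the SyncLoop (probe) entries above show the startup probe flipping from unhealthy to started, after which the readiness probe reports ready. On the API side these flips surface as the per-container started/ready flags; a minimal sketch under the same kubeconfig/client assumptions:

    # Sketch: read the probe-driven flags the kubelet entries above are flipping.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for name in ("certified-operators-x8skm", "community-operators-zwsmw"):
        pod = v1.read_namespaced_pod(name=name, namespace="openshift-marketplace")
        for cs in pod.status.container_statuses or []:
            # `started` tracks the startup probe, `ready` the readiness probe.
            print(name, cs.name, "started:", cs.started, "ready:", cs.ready)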
event={"ID":"b82ae322-0931-45e2-9b71-f6be598d8245","Type":"ContainerDied","Data":"e0a6cddee3818aae000bd80c95a928f26c894639636860cd6b5337fef0d22b8a"} Oct 14 09:38:28 crc kubenswrapper[5058]: I1014 09:38:28.096004 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.586754 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.612149 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8skm"] Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.612377 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8skm" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="registry-server" containerID="cri-o://24cc8800f6f08673aae8648868eb7ccfc1af2bf90f467bacef08909af8e57015" gracePeriod=2 Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.615690 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key\") pod \"b82ae322-0931-45e2-9b71-f6be598d8245\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.615873 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fqrt\" (UniqueName: \"kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt\") pod \"b82ae322-0931-45e2-9b71-f6be598d8245\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.616106 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle\") pod \"b82ae322-0931-45e2-9b71-f6be598d8245\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.616212 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0\") pod \"b82ae322-0931-45e2-9b71-f6be598d8245\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.616289 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory\") pod \"b82ae322-0931-45e2-9b71-f6be598d8245\" (UID: \"b82ae322-0931-45e2-9b71-f6be598d8245\") " Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.624012 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt" (OuterVolumeSpecName: "kube-api-access-4fqrt") pod "b82ae322-0931-45e2-9b71-f6be598d8245" (UID: "b82ae322-0931-45e2-9b71-f6be598d8245"). InnerVolumeSpecName "kube-api-access-4fqrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.628866 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b82ae322-0931-45e2-9b71-f6be598d8245" (UID: "b82ae322-0931-45e2-9b71-f6be598d8245"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.655011 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b82ae322-0931-45e2-9b71-f6be598d8245" (UID: "b82ae322-0931-45e2-9b71-f6be598d8245"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.663889 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b82ae322-0931-45e2-9b71-f6be598d8245" (UID: "b82ae322-0931-45e2-9b71-f6be598d8245"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.677840 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory" (OuterVolumeSpecName: "inventory") pod "b82ae322-0931-45e2-9b71-f6be598d8245" (UID: "b82ae322-0931-45e2-9b71-f6be598d8245"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.719274 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.719318 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.719332 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.719347 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fqrt\" (UniqueName: \"kubernetes.io/projected/b82ae322-0931-45e2-9b71-f6be598d8245-kube-api-access-4fqrt\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:29 crc kubenswrapper[5058]: I1014 09:38:29.719364 5058 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82ae322-0931-45e2-9b71-f6be598d8245-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.072049 5058 generic.go:334] "Generic (PLEG): container finished" podID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerID="24cc8800f6f08673aae8648868eb7ccfc1af2bf90f467bacef08909af8e57015" exitCode=0 Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.072114 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerDied","Data":"24cc8800f6f08673aae8648868eb7ccfc1af2bf90f467bacef08909af8e57015"} Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.074489 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.074516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell2-nblb9" event={"ID":"b82ae322-0931-45e2-9b71-f6be598d8245","Type":"ContainerDied","Data":"2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485"} Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.074573 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf7f553328d5eac511ba5dddbbf05a2bf44e21bb08b22e5952014d3134dd485" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.138629 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.228246 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hq9j\" (UniqueName: \"kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j\") pod \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.228368 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content\") pod \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.228509 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities\") pod \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\" (UID: \"b21b78da-17ce-4230-ae10-c9ffa39a8aef\") " Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.229881 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities" (OuterVolumeSpecName: "utilities") pod "b21b78da-17ce-4230-ae10-c9ffa39a8aef" (UID: "b21b78da-17ce-4230-ae10-c9ffa39a8aef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.235464 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j" (OuterVolumeSpecName: "kube-api-access-6hq9j") pod "b21b78da-17ce-4230-ae10-c9ffa39a8aef" (UID: "b21b78da-17ce-4230-ae10-c9ffa39a8aef"). InnerVolumeSpecName "kube-api-access-6hq9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.274091 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-openstack-openstack-cell2-j2jsg"] Oct 14 09:38:30 crc kubenswrapper[5058]: E1014 09:38:30.274693 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82ae322-0931-45e2-9b71-f6be598d8245" containerName="libvirt-openstack-openstack-cell2" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.274718 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82ae322-0931-45e2-9b71-f6be598d8245" containerName="libvirt-openstack-openstack-cell2" Oct 14 09:38:30 crc kubenswrapper[5058]: E1014 09:38:30.274740 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="extract-utilities" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.274749 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="extract-utilities" Oct 14 09:38:30 crc kubenswrapper[5058]: E1014 09:38:30.274770 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="registry-server" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.274778 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="registry-server" Oct 14 09:38:30 crc kubenswrapper[5058]: E1014 09:38:30.274813 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="extract-content" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.274822 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="extract-content" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.275066 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" containerName="registry-server" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.275093 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82ae322-0931-45e2-9b71-f6be598d8245" containerName="libvirt-openstack-openstack-cell2" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.276040 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.280728 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.282879 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.283806 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-openstack-openstack-cell2-j2jsg"] Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.284834 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-compute-config" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.301735 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b21b78da-17ce-4230-ae10-c9ffa39a8aef" (UID: "b21b78da-17ce-4230-ae10-c9ffa39a8aef"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331425 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331774 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331810 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331840 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swvn\" (UniqueName: \"kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331896 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331939 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.331965 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.332022 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0\") pod 
\"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.332056 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.332104 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.332115 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hq9j\" (UniqueName: \"kubernetes.io/projected/b21b78da-17ce-4230-ae10-c9ffa39a8aef-kube-api-access-6hq9j\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.332124 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b21b78da-17ce-4230-ae10-c9ffa39a8aef-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.433725 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.433848 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.433912 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.433997 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.434044 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " 
pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.434094 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.434137 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.434159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.434194 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swvn\" (UniqueName: \"kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.435432 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.437958 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.438026 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.438491 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.439176 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.439523 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.440580 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.444204 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.452281 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swvn\" (UniqueName: \"kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn\") pod \"nova-cell2-openstack-openstack-cell2-j2jsg\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:30 crc kubenswrapper[5058]: I1014 09:38:30.598582 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.087073 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8skm" event={"ID":"b21b78da-17ce-4230-ae10-c9ffa39a8aef","Type":"ContainerDied","Data":"c938432d19d6eeacbc66b7e467a6416dbf58d3e98a9264b137e764fb78c2af22"} Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.087368 5058 scope.go:117] "RemoveContainer" containerID="24cc8800f6f08673aae8648868eb7ccfc1af2bf90f467bacef08909af8e57015" Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.087514 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8skm" Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.151060 5058 scope.go:117] "RemoveContainer" containerID="6bb0a3fd2b2036bb60a3c5daf4ca7f2af4ef09688579bd4c14ad2c39389692e8" Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.154626 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8skm"] Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.181499 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8skm"] Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.185544 5058 scope.go:117] "RemoveContainer" containerID="cbf00c945118e024bc49db68f1e043f3a4ad689b3ef67d39526af9a3c34a559b" Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.245955 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-openstack-openstack-cell2-j2jsg"] Oct 14 09:38:31 crc kubenswrapper[5058]: W1014 09:38:31.246683 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42904e5c_c197_40d7_bf0d_157c20982f80.slice/crio-4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850 WatchSource:0}: Error finding container 4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850: Status 404 returned error can't find the container with id 4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850 Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.815889 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwsmw"] Oct 14 09:38:31 crc kubenswrapper[5058]: I1014 09:38:31.816445 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zwsmw" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="registry-server" containerID="cri-o://fcedf4216f5b4589494b5f8dd3e2e8e2503e982a209cbfcae7135392e24c3d98" gracePeriod=2 Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.106872 5058 generic.go:334] "Generic (PLEG): container finished" podID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerID="fcedf4216f5b4589494b5f8dd3e2e8e2503e982a209cbfcae7135392e24c3d98" exitCode=0 Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.106976 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerDied","Data":"fcedf4216f5b4589494b5f8dd3e2e8e2503e982a209cbfcae7135392e24c3d98"} Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.109155 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" event={"ID":"42904e5c-c197-40d7-bf0d-157c20982f80","Type":"ContainerStarted","Data":"4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850"} Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.320903 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.374530 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xdbw\" (UniqueName: \"kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw\") pod \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.374628 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities\") pod \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.374700 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content\") pod \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\" (UID: \"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a\") " Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.379995 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities" (OuterVolumeSpecName: "utilities") pod "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" (UID: "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.385417 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw" (OuterVolumeSpecName: "kube-api-access-7xdbw") pod "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" (UID: "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a"). InnerVolumeSpecName "kube-api-access-7xdbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.428818 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" (UID: "df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.477728 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.477785 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.477830 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xdbw\" (UniqueName: \"kubernetes.io/projected/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a-kube-api-access-7xdbw\") on node \"crc\" DevicePath \"\"" Oct 14 09:38:32 crc kubenswrapper[5058]: I1014 09:38:32.819038 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21b78da-17ce-4230-ae10-c9ffa39a8aef" path="/var/lib/kubelet/pods/b21b78da-17ce-4230-ae10-c9ffa39a8aef/volumes" Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.126832 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zwsmw" event={"ID":"df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a","Type":"ContainerDied","Data":"4931c091d476c9dd2d129fa2a21a7d5594a266c5ec5728518d018ace6662bf02"} Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.126870 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zwsmw" Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.126882 5058 scope.go:117] "RemoveContainer" containerID="fcedf4216f5b4589494b5f8dd3e2e8e2503e982a209cbfcae7135392e24c3d98" Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.129451 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" event={"ID":"42904e5c-c197-40d7-bf0d-157c20982f80","Type":"ContainerStarted","Data":"eeeb8114820007feb0561fb2da4795048ab21bdcb488fa2b998b2a395cb03603"} Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.154019 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" podStartSLOduration=2.624261088 podStartE2EDuration="3.153997924s" podCreationTimestamp="2025-10-14 09:38:30 +0000 UTC" firstStartedPulling="2025-10-14 09:38:31.249970842 +0000 UTC m=+10259.161054658" lastFinishedPulling="2025-10-14 09:38:31.779707688 +0000 UTC m=+10259.690791494" observedRunningTime="2025-10-14 09:38:33.148882199 +0000 UTC m=+10261.059966025" watchObservedRunningTime="2025-10-14 09:38:33.153997924 +0000 UTC m=+10261.065081730" Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.170903 5058 scope.go:117] "RemoveContainer" containerID="0ede5dc941af050726dd1188e022a9ef0f4644d06979ebf75828c08c9f2d5668" Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.176925 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zwsmw"] Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.187918 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zwsmw"] Oct 14 09:38:33 crc kubenswrapper[5058]: I1014 09:38:33.917821 5058 scope.go:117] "RemoveContainer" containerID="f6df5ef160235a4658a4b9102ac1bb83c33b576dab1de3dfc79cbb193ea9bd90" Oct 14 09:38:34 crc kubenswrapper[5058]: I1014 09:38:34.829983 
5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" path="/var/lib/kubelet/pods/df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a/volumes" Oct 14 09:38:36 crc kubenswrapper[5058]: I1014 09:38:36.791127 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:38:37 crc kubenswrapper[5058]: I1014 09:38:37.195766 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28"} Oct 14 09:41:03 crc kubenswrapper[5058]: I1014 09:41:03.656085 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:41:03 crc kubenswrapper[5058]: I1014 09:41:03.657263 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:41:23 crc kubenswrapper[5058]: I1014 09:41:23.413433 5058 generic.go:334] "Generic (PLEG): container finished" podID="465814e2-5b61-4d58-bdde-96e74d43768a" containerID="4d7d3ad9d2f808ade9a3f84d2adffd8368846ee899cfe024a6839896c4b31c41" exitCode=0 Oct 14 09:41:23 crc kubenswrapper[5058]: I1014 09:41:23.413512 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" event={"ID":"465814e2-5b61-4d58-bdde-96e74d43768a","Type":"ContainerDied","Data":"4d7d3ad9d2f808ade9a3f84d2adffd8368846ee899cfe024a6839896c4b31c41"} Oct 14 09:41:24 crc kubenswrapper[5058]: I1014 09:41:24.975413 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113097 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113169 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113201 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113236 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjtms\" (UniqueName: \"kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113362 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113506 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113528 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113561 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.113593 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1\") pod \"465814e2-5b61-4d58-bdde-96e74d43768a\" (UID: \"465814e2-5b61-4d58-bdde-96e74d43768a\") " Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.126973 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.127030 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms" (OuterVolumeSpecName: "kube-api-access-kjtms") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "kube-api-access-kjtms". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.142933 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.143168 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.143266 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.150701 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.153520 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.156518 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory" (OuterVolumeSpecName: "inventory") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.170713 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "465814e2-5b61-4d58-bdde-96e74d43768a" (UID: "465814e2-5b61-4d58-bdde-96e74d43768a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.216878 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217206 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217228 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217247 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217266 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217284 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjtms\" (UniqueName: \"kubernetes.io/projected/465814e2-5b61-4d58-bdde-96e74d43768a-kube-api-access-kjtms\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217301 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217318 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.217335 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/465814e2-5b61-4d58-bdde-96e74d43768a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.444727 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" event={"ID":"465814e2-5b61-4d58-bdde-96e74d43768a","Type":"ContainerDied","Data":"c3972bc693fcd75969b74b8ec01f43aa60b978fdc5e5b1f12c2cf675a1587e6d"} Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.444791 5058 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c3972bc693fcd75969b74b8ec01f43aa60b978fdc5e5b1f12c2cf675a1587e6d" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.444902 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-vf28n" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.564614 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-lcqlj"] Oct 14 09:41:25 crc kubenswrapper[5058]: E1014 09:41:25.565119 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="extract-content" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565140 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="extract-content" Oct 14 09:41:25 crc kubenswrapper[5058]: E1014 09:41:25.565176 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="extract-utilities" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565186 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="extract-utilities" Oct 14 09:41:25 crc kubenswrapper[5058]: E1014 09:41:25.565196 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="registry-server" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565204 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="registry-server" Oct 14 09:41:25 crc kubenswrapper[5058]: E1014 09:41:25.565229 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465814e2-5b61-4d58-bdde-96e74d43768a" containerName="nova-cell1-openstack-openstack-cell1" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565238 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="465814e2-5b61-4d58-bdde-96e74d43768a" containerName="nova-cell1-openstack-openstack-cell1" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565502 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="465814e2-5b61-4d58-bdde-96e74d43768a" containerName="nova-cell1-openstack-openstack-cell1" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.565543 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="df55ba7c-fbb5-4a30-816a-6d0bbe9f6b6a" containerName="registry-server" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.566457 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.568411 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.569305 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.570307 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.578064 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-lcqlj"] Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.628760 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629001 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629069 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629186 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzc9\" (UniqueName: \"kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629304 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629441 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.629486 5058 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732066 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732154 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzc9\" (UniqueName: \"kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732334 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732580 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732648 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732791 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.732865 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.738343 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.738376 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.739168 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.741823 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.742618 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.743009 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.761029 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzc9\" (UniqueName: \"kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9\") pod \"telemetry-openstack-openstack-cell1-lcqlj\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:25 crc kubenswrapper[5058]: I1014 09:41:25.894760 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:41:26 crc kubenswrapper[5058]: I1014 09:41:26.509970 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-lcqlj"] Oct 14 09:41:27 crc kubenswrapper[5058]: I1014 09:41:27.468160 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" event={"ID":"b017bff1-afde-427d-b072-6adef53f7daf","Type":"ContainerStarted","Data":"45a81af600de987e27692c8f25b718af566e927667a887a1130a50f5e939cb7a"} Oct 14 09:41:28 crc kubenswrapper[5058]: I1014 09:41:28.484265 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" event={"ID":"b017bff1-afde-427d-b072-6adef53f7daf","Type":"ContainerStarted","Data":"a7601160b92e48a990640cc9a01ba55be8a78611d0ce15580a9e18c159572ea5"} Oct 14 09:41:28 crc kubenswrapper[5058]: I1014 09:41:28.516726 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" podStartSLOduration=2.864807142 podStartE2EDuration="3.516704403s" podCreationTimestamp="2025-10-14 09:41:25 +0000 UTC" firstStartedPulling="2025-10-14 09:41:26.52265105 +0000 UTC m=+10434.433734876" lastFinishedPulling="2025-10-14 09:41:27.174548291 +0000 UTC m=+10435.085632137" observedRunningTime="2025-10-14 09:41:28.506929245 +0000 UTC m=+10436.418013081" watchObservedRunningTime="2025-10-14 09:41:28.516704403 +0000 UTC m=+10436.427788229" Oct 14 09:41:33 crc kubenswrapper[5058]: I1014 09:41:33.655606 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:41:33 crc kubenswrapper[5058]: I1014 09:41:33.656316 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:42:03 crc kubenswrapper[5058]: I1014 09:42:03.656247 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:42:03 crc kubenswrapper[5058]: I1014 09:42:03.656971 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:42:03 crc kubenswrapper[5058]: I1014 09:42:03.657036 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:42:03 crc kubenswrapper[5058]: I1014 09:42:03.658345 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28"} 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:42:03 crc kubenswrapper[5058]: I1014 09:42:03.658462 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28" gracePeriod=600 Oct 14 09:42:04 crc kubenswrapper[5058]: I1014 09:42:04.012062 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28" exitCode=0 Oct 14 09:42:04 crc kubenswrapper[5058]: I1014 09:42:04.012204 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28"} Oct 14 09:42:04 crc kubenswrapper[5058]: I1014 09:42:04.012442 5058 scope.go:117] "RemoveContainer" containerID="011dfcfd249ddc3b99435e0d8b51eacee8228858c5d2d56f947680a716476b47" Oct 14 09:42:05 crc kubenswrapper[5058]: I1014 09:42:05.030172 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f"} Oct 14 09:42:14 crc kubenswrapper[5058]: I1014 09:42:14.157351 5058 generic.go:334] "Generic (PLEG): container finished" podID="42904e5c-c197-40d7-bf0d-157c20982f80" containerID="eeeb8114820007feb0561fb2da4795048ab21bdcb488fa2b998b2a395cb03603" exitCode=0 Oct 14 09:42:14 crc kubenswrapper[5058]: I1014 09:42:14.157478 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" event={"ID":"42904e5c-c197-40d7-bf0d-157c20982f80","Type":"ContainerDied","Data":"eeeb8114820007feb0561fb2da4795048ab21bdcb488fa2b998b2a395cb03603"} Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.648441 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.816901 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.817380 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.817493 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.817761 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.817930 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.818101 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4swvn\" (UniqueName: \"kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.818151 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.818357 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.818391 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key\") pod \"42904e5c-c197-40d7-bf0d-157c20982f80\" (UID: \"42904e5c-c197-40d7-bf0d-157c20982f80\") " Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.827154 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell2-combined-ca-bundle") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-cell2-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.830108 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn" (OuterVolumeSpecName: "kube-api-access-4swvn") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "kube-api-access-4swvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.847524 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.848648 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0" (OuterVolumeSpecName: "nova-cell2-compute-config-0") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-cell2-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.861347 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory" (OuterVolumeSpecName: "inventory") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.868309 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1" (OuterVolumeSpecName: "nova-cell2-compute-config-1") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-cell2-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.878235 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.878857 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.893949 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "42904e5c-c197-40d7-bf0d-157c20982f80" (UID: "42904e5c-c197-40d7-bf0d-157c20982f80"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920350 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920389 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920404 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920417 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4swvn\" (UniqueName: \"kubernetes.io/projected/42904e5c-c197-40d7-bf0d-157c20982f80-kube-api-access-4swvn\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920431 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920444 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920455 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920469 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:15 crc kubenswrapper[5058]: I1014 09:42:15.920480 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42904e5c-c197-40d7-bf0d-157c20982f80-nova-cell2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.225303 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" event={"ID":"42904e5c-c197-40d7-bf0d-157c20982f80","Type":"ContainerDied","Data":"4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850"} Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.225362 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4106d8849f467d61c5b20473afb20eb37039da4d46b5fc5042916f7e18eba850" Oct 14 09:42:16 
crc kubenswrapper[5058]: I1014 09:42:16.225367 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-openstack-cell2-j2jsg" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.369371 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell2-7vqj9"] Oct 14 09:42:16 crc kubenswrapper[5058]: E1014 09:42:16.370411 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42904e5c-c197-40d7-bf0d-157c20982f80" containerName="nova-cell2-openstack-openstack-cell2" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.370459 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="42904e5c-c197-40d7-bf0d-157c20982f80" containerName="nova-cell2-openstack-openstack-cell2" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.370985 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="42904e5c-c197-40d7-bf0d-157c20982f80" containerName="nova-cell2-openstack-openstack-cell2" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.372439 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.375375 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.375760 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.380687 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell2-7vqj9"] Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431187 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431393 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431581 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431788 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " 
pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431929 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmd8\" (UniqueName: \"kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.431964 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.432018 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534299 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534489 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534584 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlmd8\" (UniqueName: \"kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534621 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534670 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc 
kubenswrapper[5058]: I1014 09:42:16.534720 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.534790 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.546950 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.550109 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.553671 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.570289 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.586159 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.587876 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.589345 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hlmd8\" (UniqueName: \"kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8\") pod \"telemetry-openstack-openstack-cell2-7vqj9\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:16 crc kubenswrapper[5058]: I1014 09:42:16.708825 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:42:17 crc kubenswrapper[5058]: I1014 09:42:17.305452 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell2-7vqj9"] Oct 14 09:42:17 crc kubenswrapper[5058]: W1014 09:42:17.315358 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba866ea4_9da5_4c06_9832_80094dd123a3.slice/crio-955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1 WatchSource:0}: Error finding container 955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1: Status 404 returned error can't find the container with id 955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1 Oct 14 09:42:18 crc kubenswrapper[5058]: I1014 09:42:18.258060 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" event={"ID":"ba866ea4-9da5-4c06-9832-80094dd123a3","Type":"ContainerStarted","Data":"955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1"} Oct 14 09:42:19 crc kubenswrapper[5058]: I1014 09:42:19.273899 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" event={"ID":"ba866ea4-9da5-4c06-9832-80094dd123a3","Type":"ContainerStarted","Data":"2fa19db63d521de91bed74d9c8c887bc4d7c6e04bb5be6ff0882ee0ead1e0a97"} Oct 14 09:42:19 crc kubenswrapper[5058]: I1014 09:42:19.316812 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" podStartSLOduration=2.403820787 podStartE2EDuration="3.316782743s" podCreationTimestamp="2025-10-14 09:42:16 +0000 UTC" firstStartedPulling="2025-10-14 09:42:17.318198932 +0000 UTC m=+10485.229282778" lastFinishedPulling="2025-10-14 09:42:18.231160918 +0000 UTC m=+10486.142244734" observedRunningTime="2025-10-14 09:42:19.30438558 +0000 UTC m=+10487.215469406" watchObservedRunningTime="2025-10-14 09:42:19.316782743 +0000 UTC m=+10487.227866539" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.192587 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.197430 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.221863 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.307352 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.307422 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtg8n\" (UniqueName: \"kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.307608 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.409439 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.409498 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtg8n\" (UniqueName: \"kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.409572 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.410040 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.410646 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.438584 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xtg8n\" (UniqueName: \"kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n\") pod \"redhat-operators-tmbxb\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:31 crc kubenswrapper[5058]: I1014 09:44:31.537525 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:32 crc kubenswrapper[5058]: I1014 09:44:32.082236 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.048881 5058 generic.go:334] "Generic (PLEG): container finished" podID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerID="3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5" exitCode=0 Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.049060 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerDied","Data":"3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5"} Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.050927 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerStarted","Data":"61cd5ae20075812e413cd3e3790a990192ed58f192f484042e5d3bab91d48844"} Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.051871 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.655713 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:44:33 crc kubenswrapper[5058]: I1014 09:44:33.656148 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:44:35 crc kubenswrapper[5058]: I1014 09:44:35.098031 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerStarted","Data":"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7"} Oct 14 09:44:38 crc kubenswrapper[5058]: I1014 09:44:38.141654 5058 generic.go:334] "Generic (PLEG): container finished" podID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerID="54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7" exitCode=0 Oct 14 09:44:38 crc kubenswrapper[5058]: I1014 09:44:38.141713 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerDied","Data":"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7"} Oct 14 09:44:39 crc kubenswrapper[5058]: I1014 09:44:39.157991 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" 
event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerStarted","Data":"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2"} Oct 14 09:44:39 crc kubenswrapper[5058]: I1014 09:44:39.187261 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmbxb" podStartSLOduration=2.46251524 podStartE2EDuration="8.187242095s" podCreationTimestamp="2025-10-14 09:44:31 +0000 UTC" firstStartedPulling="2025-10-14 09:44:33.051448049 +0000 UTC m=+10620.962531855" lastFinishedPulling="2025-10-14 09:44:38.776174874 +0000 UTC m=+10626.687258710" observedRunningTime="2025-10-14 09:44:39.183492288 +0000 UTC m=+10627.094576104" watchObservedRunningTime="2025-10-14 09:44:39.187242095 +0000 UTC m=+10627.098325901" Oct 14 09:44:41 crc kubenswrapper[5058]: I1014 09:44:41.538334 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:41 crc kubenswrapper[5058]: I1014 09:44:41.538904 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:42 crc kubenswrapper[5058]: I1014 09:44:42.610886 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmbxb" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="registry-server" probeResult="failure" output=< Oct 14 09:44:42 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:44:42 crc kubenswrapper[5058]: > Oct 14 09:44:51 crc kubenswrapper[5058]: I1014 09:44:51.654269 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:51 crc kubenswrapper[5058]: I1014 09:44:51.708641 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:51 crc kubenswrapper[5058]: I1014 09:44:51.909554 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:53 crc kubenswrapper[5058]: I1014 09:44:53.371029 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tmbxb" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="registry-server" containerID="cri-o://ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2" gracePeriod=2 Oct 14 09:44:53 crc kubenswrapper[5058]: I1014 09:44:53.875317 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.063843 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content\") pod \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.063920 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtg8n\" (UniqueName: \"kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n\") pod \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.064086 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities\") pod \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\" (UID: \"2b41cace-8d7c-40ff-be08-4b54c686d3fd\") " Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.065241 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities" (OuterVolumeSpecName: "utilities") pod "2b41cace-8d7c-40ff-be08-4b54c686d3fd" (UID: "2b41cace-8d7c-40ff-be08-4b54c686d3fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.073789 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n" (OuterVolumeSpecName: "kube-api-access-xtg8n") pod "2b41cace-8d7c-40ff-be08-4b54c686d3fd" (UID: "2b41cace-8d7c-40ff-be08-4b54c686d3fd"). InnerVolumeSpecName "kube-api-access-xtg8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.166273 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtg8n\" (UniqueName: \"kubernetes.io/projected/2b41cace-8d7c-40ff-be08-4b54c686d3fd-kube-api-access-xtg8n\") on node \"crc\" DevicePath \"\"" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.166304 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.172544 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b41cace-8d7c-40ff-be08-4b54c686d3fd" (UID: "2b41cace-8d7c-40ff-be08-4b54c686d3fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.268670 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b41cace-8d7c-40ff-be08-4b54c686d3fd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.389098 5058 generic.go:334] "Generic (PLEG): container finished" podID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerID="ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2" exitCode=0 Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.389175 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmbxb" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.389221 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerDied","Data":"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2"} Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.390132 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmbxb" event={"ID":"2b41cace-8d7c-40ff-be08-4b54c686d3fd","Type":"ContainerDied","Data":"61cd5ae20075812e413cd3e3790a990192ed58f192f484042e5d3bab91d48844"} Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.390172 5058 scope.go:117] "RemoveContainer" containerID="ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.429520 5058 scope.go:117] "RemoveContainer" containerID="54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.450626 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.461759 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tmbxb"] Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.466525 5058 scope.go:117] "RemoveContainer" containerID="3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.541071 5058 scope.go:117] "RemoveContainer" containerID="ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2" Oct 14 09:44:54 crc kubenswrapper[5058]: E1014 09:44:54.541701 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2\": container with ID starting with ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2 not found: ID does not exist" containerID="ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.541773 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2"} err="failed to get container status \"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2\": rpc error: code = NotFound desc = could not find container \"ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2\": container with ID starting with ab601c00ff7b4b6f37d3af7a74d0182bb0d9cc33e7d5454ec4a28c05302e2dc2 not found: ID does not exist" Oct 14 09:44:54 crc 
kubenswrapper[5058]: I1014 09:44:54.541849 5058 scope.go:117] "RemoveContainer" containerID="54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7" Oct 14 09:44:54 crc kubenswrapper[5058]: E1014 09:44:54.542230 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7\": container with ID starting with 54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7 not found: ID does not exist" containerID="54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.542277 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7"} err="failed to get container status \"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7\": rpc error: code = NotFound desc = could not find container \"54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7\": container with ID starting with 54bcc7f354f90b9bbea8fd7b060f56c2cdffaf8453b152fbc44ee7e165237ac7 not found: ID does not exist" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.542311 5058 scope.go:117] "RemoveContainer" containerID="3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5" Oct 14 09:44:54 crc kubenswrapper[5058]: E1014 09:44:54.542726 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5\": container with ID starting with 3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5 not found: ID does not exist" containerID="3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.542772 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5"} err="failed to get container status \"3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5\": rpc error: code = NotFound desc = could not find container \"3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5\": container with ID starting with 3c59fa2cc5a1cd5881181ebcaf15d4d7473b96c66c00352359e5ae16c9bbc1e5 not found: ID does not exist" Oct 14 09:44:54 crc kubenswrapper[5058]: I1014 09:44:54.803162 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" path="/var/lib/kubelet/pods/2b41cace-8d7c-40ff-be08-4b54c686d3fd/volumes" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.175727 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd"] Oct 14 09:45:00 crc kubenswrapper[5058]: E1014 09:45:00.176845 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="extract-utilities" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.176863 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="extract-utilities" Oct 14 09:45:00 crc kubenswrapper[5058]: E1014 09:45:00.176884 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="extract-content" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 
09:45:00.176892 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="extract-content" Oct 14 09:45:00 crc kubenswrapper[5058]: E1014 09:45:00.176942 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="registry-server" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.176950 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="registry-server" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.177218 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b41cace-8d7c-40ff-be08-4b54c686d3fd" containerName="registry-server" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.178091 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.187220 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.188105 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.200321 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd"] Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.311870 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.313639 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzbwl\" (UniqueName: \"kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.313778 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.416557 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.416726 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzbwl\" (UniqueName: 
\"kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.416833 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:00 crc kubenswrapper[5058]: I1014 09:45:00.417826 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:01 crc kubenswrapper[5058]: I1014 09:45:01.092504 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:01 crc kubenswrapper[5058]: I1014 09:45:01.103571 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzbwl\" (UniqueName: \"kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl\") pod \"collect-profiles-29340585-hmjqd\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:01 crc kubenswrapper[5058]: I1014 09:45:01.106113 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:01 crc kubenswrapper[5058]: I1014 09:45:01.668383 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd"] Oct 14 09:45:02 crc kubenswrapper[5058]: I1014 09:45:02.504329 5058 generic.go:334] "Generic (PLEG): container finished" podID="8ce44cb3-1d64-4749-84c1-5492175b4472" containerID="ded4c859ac08245f08e730f1ca8714bd45b346b766ff7575c5664067f2a7b965" exitCode=0 Oct 14 09:45:02 crc kubenswrapper[5058]: I1014 09:45:02.504694 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" event={"ID":"8ce44cb3-1d64-4749-84c1-5492175b4472","Type":"ContainerDied","Data":"ded4c859ac08245f08e730f1ca8714bd45b346b766ff7575c5664067f2a7b965"} Oct 14 09:45:02 crc kubenswrapper[5058]: I1014 09:45:02.504738 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" event={"ID":"8ce44cb3-1d64-4749-84c1-5492175b4472","Type":"ContainerStarted","Data":"898e5492cf4a75526c3b4a29c2d5524a412b8cb9d6e332a4cfda67923d8fb3fd"} Oct 14 09:45:03 crc kubenswrapper[5058]: I1014 09:45:03.656167 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:45:03 crc kubenswrapper[5058]: I1014 09:45:03.656479 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:45:03 crc kubenswrapper[5058]: I1014 09:45:03.938670 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.128598 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume\") pod \"8ce44cb3-1d64-4749-84c1-5492175b4472\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.128920 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume\") pod \"8ce44cb3-1d64-4749-84c1-5492175b4472\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.128973 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzbwl\" (UniqueName: \"kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl\") pod \"8ce44cb3-1d64-4749-84c1-5492175b4472\" (UID: \"8ce44cb3-1d64-4749-84c1-5492175b4472\") " Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.131654 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ce44cb3-1d64-4749-84c1-5492175b4472" (UID: "8ce44cb3-1d64-4749-84c1-5492175b4472"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.137179 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl" (OuterVolumeSpecName: "kube-api-access-mzbwl") pod "8ce44cb3-1d64-4749-84c1-5492175b4472" (UID: "8ce44cb3-1d64-4749-84c1-5492175b4472"). InnerVolumeSpecName "kube-api-access-mzbwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.137198 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ce44cb3-1d64-4749-84c1-5492175b4472" (UID: "8ce44cb3-1d64-4749-84c1-5492175b4472"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.231613 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce44cb3-1d64-4749-84c1-5492175b4472-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.231929 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ce44cb3-1d64-4749-84c1-5492175b4472-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.231997 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzbwl\" (UniqueName: \"kubernetes.io/projected/8ce44cb3-1d64-4749-84c1-5492175b4472-kube-api-access-mzbwl\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.527973 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" event={"ID":"8ce44cb3-1d64-4749-84c1-5492175b4472","Type":"ContainerDied","Data":"898e5492cf4a75526c3b4a29c2d5524a412b8cb9d6e332a4cfda67923d8fb3fd"} Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.528014 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898e5492cf4a75526c3b4a29c2d5524a412b8cb9d6e332a4cfda67923d8fb3fd" Oct 14 09:45:04 crc kubenswrapper[5058]: I1014 09:45:04.528045 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340585-hmjqd" Oct 14 09:45:05 crc kubenswrapper[5058]: I1014 09:45:05.024473 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98"] Oct 14 09:45:05 crc kubenswrapper[5058]: I1014 09:45:05.033784 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340540-x4m98"] Oct 14 09:45:06 crc kubenswrapper[5058]: I1014 09:45:06.814104 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b947c4d-a2b8-4607-adb9-e54db3b99255" path="/var/lib/kubelet/pods/4b947c4d-a2b8-4607-adb9-e54db3b99255/volumes" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.636285 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:33 crc kubenswrapper[5058]: E1014 09:45:33.637983 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce44cb3-1d64-4749-84c1-5492175b4472" containerName="collect-profiles" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.638022 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce44cb3-1d64-4749-84c1-5492175b4472" containerName="collect-profiles" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.638749 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce44cb3-1d64-4749-84c1-5492175b4472" containerName="collect-profiles" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.642675 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.650470 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.656244 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.656308 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.656361 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.657262 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.657338 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" gracePeriod=600 Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.753592 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.753715 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjdg\" (UniqueName: \"kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.753893 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: E1014 09:45:33.788165 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.856241 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.856710 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.857002 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.857141 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbjdg\" (UniqueName: \"kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.857514 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.877492 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbjdg\" (UniqueName: \"kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg\") pod \"redhat-marketplace-cddgr\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.919747 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" exitCode=0 Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.919820 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f"} Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.919857 5058 scope.go:117] "RemoveContainer" containerID="7f0647e438d7d9cbf59051e98d856b9a2193fba42fa5cc942c1d18db50496b28" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.920750 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:45:33 crc 
kubenswrapper[5058]: E1014 09:45:33.921062 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:45:33 crc kubenswrapper[5058]: I1014 09:45:33.978957 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:34 crc kubenswrapper[5058]: I1014 09:45:34.434861 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:34 crc kubenswrapper[5058]: I1014 09:45:34.939997 5058 generic.go:334] "Generic (PLEG): container finished" podID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerID="de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518" exitCode=0 Oct 14 09:45:34 crc kubenswrapper[5058]: I1014 09:45:34.940115 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerDied","Data":"de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518"} Oct 14 09:45:34 crc kubenswrapper[5058]: I1014 09:45:34.940380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerStarted","Data":"144a8d5ba8e6c4898a93cd8711dc204a95456020d4054ac60f86e03342843126"} Oct 14 09:45:35 crc kubenswrapper[5058]: I1014 09:45:35.955891 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerStarted","Data":"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9"} Oct 14 09:45:35 crc kubenswrapper[5058]: I1014 09:45:35.958272 5058 generic.go:334] "Generic (PLEG): container finished" podID="b017bff1-afde-427d-b072-6adef53f7daf" containerID="a7601160b92e48a990640cc9a01ba55be8a78611d0ce15580a9e18c159572ea5" exitCode=0 Oct 14 09:45:35 crc kubenswrapper[5058]: I1014 09:45:35.958356 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" event={"ID":"b017bff1-afde-427d-b072-6adef53f7daf","Type":"ContainerDied","Data":"a7601160b92e48a990640cc9a01ba55be8a78611d0ce15580a9e18c159572ea5"} Oct 14 09:45:36 crc kubenswrapper[5058]: I1014 09:45:36.976209 5058 generic.go:334] "Generic (PLEG): container finished" podID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerID="07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9" exitCode=0 Oct 14 09:45:36 crc kubenswrapper[5058]: I1014 09:45:36.976263 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerDied","Data":"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9"} Oct 14 09:45:37 crc kubenswrapper[5058]: I1014 09:45:37.992873 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" event={"ID":"b017bff1-afde-427d-b072-6adef53f7daf","Type":"ContainerDied","Data":"45a81af600de987e27692c8f25b718af566e927667a887a1130a50f5e939cb7a"} Oct 14 
09:45:37 crc kubenswrapper[5058]: I1014 09:45:37.993173 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a81af600de987e27692c8f25b718af566e927667a887a1130a50f5e939cb7a" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.035196 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.162703 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163139 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163175 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzc9\" (UniqueName: \"kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163229 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163330 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163356 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.163459 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1\") pod \"b017bff1-afde-427d-b072-6adef53f7daf\" (UID: \"b017bff1-afde-427d-b072-6adef53f7daf\") " Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.171922 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9" (OuterVolumeSpecName: "kube-api-access-stzc9") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "kube-api-access-stzc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.172851 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.205038 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory" (OuterVolumeSpecName: "inventory") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.205522 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.208285 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.215439 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.217336 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b017bff1-afde-427d-b072-6adef53f7daf" (UID: "b017bff1-afde-427d-b072-6adef53f7daf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.265865 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.265890 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.265901 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.265910 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzc9\" (UniqueName: \"kubernetes.io/projected/b017bff1-afde-427d-b072-6adef53f7daf-kube-api-access-stzc9\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.266097 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.266105 5058 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:38 crc kubenswrapper[5058]: I1014 09:45:38.266115 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b017bff1-afde-427d-b072-6adef53f7daf-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.033043 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-lcqlj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.033909 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerStarted","Data":"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471"} Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.071952 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cddgr" podStartSLOduration=3.178379823 podStartE2EDuration="6.071934967s" podCreationTimestamp="2025-10-14 09:45:33 +0000 UTC" firstStartedPulling="2025-10-14 09:45:34.942284897 +0000 UTC m=+10682.853368723" lastFinishedPulling="2025-10-14 09:45:37.835840031 +0000 UTC m=+10685.746923867" observedRunningTime="2025-10-14 09:45:39.057337792 +0000 UTC m=+10686.968421618" watchObservedRunningTime="2025-10-14 09:45:39.071934967 +0000 UTC m=+10686.983018783" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.146726 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bd2pj"] Oct 14 09:45:39 crc kubenswrapper[5058]: E1014 09:45:39.147270 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b017bff1-afde-427d-b072-6adef53f7daf" containerName="telemetry-openstack-openstack-cell1" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.147293 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b017bff1-afde-427d-b072-6adef53f7daf" containerName="telemetry-openstack-openstack-cell1" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.147617 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b017bff1-afde-427d-b072-6adef53f7daf" containerName="telemetry-openstack-openstack-cell1" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.148537 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.150711 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.151336 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.151543 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.163742 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bd2pj"] Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.292015 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.292307 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.292486 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.292623 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ccn\" (UniqueName: \"kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.292856 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.394445 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc 
kubenswrapper[5058]: I1014 09:45:39.394529 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ccn\" (UniqueName: \"kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.394663 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.394715 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.394742 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.401481 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.401603 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.401822 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.402034 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.412285 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ccn\" 
(UniqueName: \"kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn\") pod \"neutron-sriov-openstack-openstack-cell1-bd2pj\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:39 crc kubenswrapper[5058]: I1014 09:45:39.505838 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:45:40 crc kubenswrapper[5058]: I1014 09:45:40.160792 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bd2pj"] Oct 14 09:45:41 crc kubenswrapper[5058]: I1014 09:45:41.064174 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" event={"ID":"13b04ee2-dd54-4eb5-ba47-485bf575545c","Type":"ContainerStarted","Data":"5d3a70eaba16ee039bb607284241057c048b335a32ec60c3dbfcb13e2d602c5d"} Oct 14 09:45:42 crc kubenswrapper[5058]: I1014 09:45:42.079167 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" event={"ID":"13b04ee2-dd54-4eb5-ba47-485bf575545c","Type":"ContainerStarted","Data":"1f7b00accdd069d7173f794d37b62331fed3116de2bcb7176b432d20894d2e2e"} Oct 14 09:45:42 crc kubenswrapper[5058]: I1014 09:45:42.119586 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" podStartSLOduration=2.488048733 podStartE2EDuration="3.119562584s" podCreationTimestamp="2025-10-14 09:45:39 +0000 UTC" firstStartedPulling="2025-10-14 09:45:40.505755065 +0000 UTC m=+10688.416838911" lastFinishedPulling="2025-10-14 09:45:41.137268916 +0000 UTC m=+10689.048352762" observedRunningTime="2025-10-14 09:45:42.102092517 +0000 UTC m=+10690.013176333" watchObservedRunningTime="2025-10-14 09:45:42.119562584 +0000 UTC m=+10690.030646400" Oct 14 09:45:43 crc kubenswrapper[5058]: I1014 09:45:43.980198 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:43 crc kubenswrapper[5058]: I1014 09:45:43.980579 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:44 crc kubenswrapper[5058]: I1014 09:45:44.069347 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:44 crc kubenswrapper[5058]: I1014 09:45:44.189254 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:44 crc kubenswrapper[5058]: I1014 09:45:44.314779 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.144790 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cddgr" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="registry-server" containerID="cri-o://587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471" gracePeriod=2 Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.727773 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.791016 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:45:46 crc kubenswrapper[5058]: E1014 09:45:46.791484 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.893948 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbjdg\" (UniqueName: \"kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg\") pod \"b4e98965-5016-4413-b28c-ea5d7f3740da\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.894015 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content\") pod \"b4e98965-5016-4413-b28c-ea5d7f3740da\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.894092 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities\") pod \"b4e98965-5016-4413-b28c-ea5d7f3740da\" (UID: \"b4e98965-5016-4413-b28c-ea5d7f3740da\") " Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.895983 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities" (OuterVolumeSpecName: "utilities") pod "b4e98965-5016-4413-b28c-ea5d7f3740da" (UID: "b4e98965-5016-4413-b28c-ea5d7f3740da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.906201 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg" (OuterVolumeSpecName: "kube-api-access-mbjdg") pod "b4e98965-5016-4413-b28c-ea5d7f3740da" (UID: "b4e98965-5016-4413-b28c-ea5d7f3740da"). InnerVolumeSpecName "kube-api-access-mbjdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.914145 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4e98965-5016-4413-b28c-ea5d7f3740da" (UID: "b4e98965-5016-4413-b28c-ea5d7f3740da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.997512 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbjdg\" (UniqueName: \"kubernetes.io/projected/b4e98965-5016-4413-b28c-ea5d7f3740da-kube-api-access-mbjdg\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.997588 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:46 crc kubenswrapper[5058]: I1014 09:45:46.997618 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e98965-5016-4413-b28c-ea5d7f3740da-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.163839 5058 generic.go:334] "Generic (PLEG): container finished" podID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerID="587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471" exitCode=0 Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.163908 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cddgr" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.163898 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerDied","Data":"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471"} Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.164066 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cddgr" event={"ID":"b4e98965-5016-4413-b28c-ea5d7f3740da","Type":"ContainerDied","Data":"144a8d5ba8e6c4898a93cd8711dc204a95456020d4054ac60f86e03342843126"} Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.164097 5058 scope.go:117] "RemoveContainer" containerID="587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.202253 5058 scope.go:117] "RemoveContainer" containerID="07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.218396 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.239871 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cddgr"] Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.257127 5058 scope.go:117] "RemoveContainer" containerID="de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.303065 5058 scope.go:117] "RemoveContainer" containerID="587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471" Oct 14 09:45:47 crc kubenswrapper[5058]: E1014 09:45:47.303600 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471\": container with ID starting with 587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471 not found: ID does not exist" containerID="587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.303648 5058 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471"} err="failed to get container status \"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471\": rpc error: code = NotFound desc = could not find container \"587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471\": container with ID starting with 587f1a9a248486269ff4c38e0a3bb4846458b874525707068544dcd44eb8b471 not found: ID does not exist" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.303681 5058 scope.go:117] "RemoveContainer" containerID="07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9" Oct 14 09:45:47 crc kubenswrapper[5058]: E1014 09:45:47.304215 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9\": container with ID starting with 07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9 not found: ID does not exist" containerID="07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.304261 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9"} err="failed to get container status \"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9\": rpc error: code = NotFound desc = could not find container \"07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9\": container with ID starting with 07fd6a2444d60a5d5380cbe3e8cc697dff4d06de29243e094cdea25de3ac1dc9 not found: ID does not exist" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.304287 5058 scope.go:117] "RemoveContainer" containerID="de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518" Oct 14 09:45:47 crc kubenswrapper[5058]: E1014 09:45:47.304711 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518\": container with ID starting with de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518 not found: ID does not exist" containerID="de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518" Oct 14 09:45:47 crc kubenswrapper[5058]: I1014 09:45:47.304785 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518"} err="failed to get container status \"de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518\": rpc error: code = NotFound desc = could not find container \"de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518\": container with ID starting with de959560bcbb753b05b7f4de1b800cf8298937f6b3ba19aa40c8b36d2520f518 not found: ID does not exist" Oct 14 09:45:48 crc kubenswrapper[5058]: I1014 09:45:48.812134 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" path="/var/lib/kubelet/pods/b4e98965-5016-4413-b28c-ea5d7f3740da/volumes" Oct 14 09:45:49 crc kubenswrapper[5058]: I1014 09:45:49.623202 5058 scope.go:117] "RemoveContainer" containerID="f566e96ce24926c0dcbf38d0bf09e1229a5d56dd12210924e7c37ebbe451674b" Oct 14 09:46:00 crc kubenswrapper[5058]: I1014 09:46:00.790150 5058 scope.go:117] "RemoveContainer" 
containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:46:00 crc kubenswrapper[5058]: E1014 09:46:00.791002 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:46:13 crc kubenswrapper[5058]: I1014 09:46:13.789648 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:46:13 crc kubenswrapper[5058]: E1014 09:46:13.790789 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:46:24 crc kubenswrapper[5058]: I1014 09:46:24.790611 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:46:24 crc kubenswrapper[5058]: E1014 09:46:24.791884 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:46:33 crc kubenswrapper[5058]: I1014 09:46:33.892360 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba866ea4-9da5-4c06-9832-80094dd123a3" containerID="2fa19db63d521de91bed74d9c8c887bc4d7c6e04bb5be6ff0882ee0ead1e0a97" exitCode=0 Oct 14 09:46:33 crc kubenswrapper[5058]: I1014 09:46:33.893111 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" event={"ID":"ba866ea4-9da5-4c06-9832-80094dd123a3","Type":"ContainerDied","Data":"2fa19db63d521de91bed74d9c8c887bc4d7c6e04bb5be6ff0882ee0ead1e0a97"} Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.466964 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.627825 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628035 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlmd8\" (UniqueName: \"kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628079 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628157 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628292 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628317 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.628344 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2\") pod \"ba866ea4-9da5-4c06-9832-80094dd123a3\" (UID: \"ba866ea4-9da5-4c06-9832-80094dd123a3\") " Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.634263 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8" (OuterVolumeSpecName: "kube-api-access-hlmd8") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "kube-api-access-hlmd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.635435 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.664375 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.665998 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.666095 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory" (OuterVolumeSpecName: "inventory") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.666991 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.668946 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba866ea4-9da5-4c06-9832-80094dd123a3" (UID: "ba866ea4-9da5-4c06-9832-80094dd123a3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730470 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlmd8\" (UniqueName: \"kubernetes.io/projected/ba866ea4-9da5-4c06-9832-80094dd123a3-kube-api-access-hlmd8\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730761 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730773 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730783 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730804 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730812 5058 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.730822 5058 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba866ea4-9da5-4c06-9832-80094dd123a3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.918351 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" event={"ID":"ba866ea4-9da5-4c06-9832-80094dd123a3","Type":"ContainerDied","Data":"955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1"} Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.918426 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955cb44e39df8ff1ba0d80954e036f7ab8e16ec4c35a8bb8481b117de17a03e1" Oct 14 09:46:35 crc kubenswrapper[5058]: I1014 09:46:35.918545 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell2-7vqj9" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.057369 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell2-dz9nm"] Oct 14 09:46:36 crc kubenswrapper[5058]: E1014 09:46:36.058240 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="extract-content" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.058272 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="extract-content" Oct 14 09:46:36 crc kubenswrapper[5058]: E1014 09:46:36.058313 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="extract-utilities" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.058333 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="extract-utilities" Oct 14 09:46:36 crc kubenswrapper[5058]: E1014 09:46:36.058412 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="registry-server" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.058434 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="registry-server" Oct 14 09:46:36 crc kubenswrapper[5058]: E1014 09:46:36.058478 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba866ea4-9da5-4c06-9832-80094dd123a3" containerName="telemetry-openstack-openstack-cell2" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.058497 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba866ea4-9da5-4c06-9832-80094dd123a3" containerName="telemetry-openstack-openstack-cell2" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.059144 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e98965-5016-4413-b28c-ea5d7f3740da" containerName="registry-server" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.059207 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba866ea4-9da5-4c06-9832-80094dd123a3" containerName="telemetry-openstack-openstack-cell2" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.060419 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.071774 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell2-dz9nm"] Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.092841 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.093257 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.249671 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.249869 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.249934 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.249985 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.250432 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22fk\" (UniqueName: \"kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.353059 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.353159 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.353207 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.353249 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.353424 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w22fk\" (UniqueName: \"kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.358579 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.358736 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.360039 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.361699 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.386174 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w22fk\" (UniqueName: \"kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk\") pod 
\"neutron-sriov-openstack-openstack-cell2-dz9nm\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:36 crc kubenswrapper[5058]: I1014 09:46:36.412926 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:46:37 crc kubenswrapper[5058]: I1014 09:46:37.065986 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell2-dz9nm"] Oct 14 09:46:37 crc kubenswrapper[5058]: W1014 09:46:37.072334 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb564386b_d09e_432c_aff9_00e4c40f8aac.slice/crio-7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f WatchSource:0}: Error finding container 7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f: Status 404 returned error can't find the container with id 7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f Oct 14 09:46:37 crc kubenswrapper[5058]: I1014 09:46:37.791111 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:46:37 crc kubenswrapper[5058]: E1014 09:46:37.791939 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:46:37 crc kubenswrapper[5058]: I1014 09:46:37.952371 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" event={"ID":"b564386b-d09e-432c-aff9-00e4c40f8aac","Type":"ContainerStarted","Data":"7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f"} Oct 14 09:46:38 crc kubenswrapper[5058]: I1014 09:46:38.963464 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" event={"ID":"b564386b-d09e-432c-aff9-00e4c40f8aac","Type":"ContainerStarted","Data":"6c94d491b17c59ebfa23558178d61980e4e83e6dfe4b35f687625ab0a84b40a4"} Oct 14 09:46:38 crc kubenswrapper[5058]: I1014 09:46:38.991335 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" podStartSLOduration=2.076025702 podStartE2EDuration="2.991311994s" podCreationTimestamp="2025-10-14 09:46:36 +0000 UTC" firstStartedPulling="2025-10-14 09:46:37.07631278 +0000 UTC m=+10744.987396596" lastFinishedPulling="2025-10-14 09:46:37.991599052 +0000 UTC m=+10745.902682888" observedRunningTime="2025-10-14 09:46:38.980417545 +0000 UTC m=+10746.891501391" watchObservedRunningTime="2025-10-14 09:46:38.991311994 +0000 UTC m=+10746.902395810" Oct 14 09:46:51 crc kubenswrapper[5058]: I1014 09:46:51.790968 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:46:51 crc kubenswrapper[5058]: E1014 09:46:51.792602 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:02 crc kubenswrapper[5058]: I1014 09:47:02.809752 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:47:02 crc kubenswrapper[5058]: E1014 09:47:02.811131 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:13 crc kubenswrapper[5058]: I1014 09:47:13.790599 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:47:13 crc kubenswrapper[5058]: E1014 09:47:13.791947 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:26 crc kubenswrapper[5058]: I1014 09:47:26.791350 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:47:26 crc kubenswrapper[5058]: E1014 09:47:26.792393 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:41 crc kubenswrapper[5058]: I1014 09:47:41.790992 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:47:41 crc kubenswrapper[5058]: E1014 09:47:41.792198 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:54 crc kubenswrapper[5058]: I1014 09:47:54.056823 5058 generic.go:334] "Generic (PLEG): container finished" podID="13b04ee2-dd54-4eb5-ba47-485bf575545c" containerID="1f7b00accdd069d7173f794d37b62331fed3116de2bcb7176b432d20894d2e2e" exitCode=0 Oct 14 09:47:54 crc kubenswrapper[5058]: I1014 09:47:54.056887 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" event={"ID":"13b04ee2-dd54-4eb5-ba47-485bf575545c","Type":"ContainerDied","Data":"1f7b00accdd069d7173f794d37b62331fed3116de2bcb7176b432d20894d2e2e"} Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 
09:47:55.501136 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.566440 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0\") pod \"13b04ee2-dd54-4eb5-ba47-485bf575545c\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.566489 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95ccn\" (UniqueName: \"kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn\") pod \"13b04ee2-dd54-4eb5-ba47-485bf575545c\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.566538 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key\") pod \"13b04ee2-dd54-4eb5-ba47-485bf575545c\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.566560 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle\") pod \"13b04ee2-dd54-4eb5-ba47-485bf575545c\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.566587 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory\") pod \"13b04ee2-dd54-4eb5-ba47-485bf575545c\" (UID: \"13b04ee2-dd54-4eb5-ba47-485bf575545c\") " Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.574986 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "13b04ee2-dd54-4eb5-ba47-485bf575545c" (UID: "13b04ee2-dd54-4eb5-ba47-485bf575545c"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.575162 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn" (OuterVolumeSpecName: "kube-api-access-95ccn") pod "13b04ee2-dd54-4eb5-ba47-485bf575545c" (UID: "13b04ee2-dd54-4eb5-ba47-485bf575545c"). InnerVolumeSpecName "kube-api-access-95ccn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.602178 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13b04ee2-dd54-4eb5-ba47-485bf575545c" (UID: "13b04ee2-dd54-4eb5-ba47-485bf575545c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.607523 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "13b04ee2-dd54-4eb5-ba47-485bf575545c" (UID: "13b04ee2-dd54-4eb5-ba47-485bf575545c"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.621101 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory" (OuterVolumeSpecName: "inventory") pod "13b04ee2-dd54-4eb5-ba47-485bf575545c" (UID: "13b04ee2-dd54-4eb5-ba47-485bf575545c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.669360 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.669407 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95ccn\" (UniqueName: \"kubernetes.io/projected/13b04ee2-dd54-4eb5-ba47-485bf575545c-kube-api-access-95ccn\") on node \"crc\" DevicePath \"\"" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.669421 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.669433 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:47:55 crc kubenswrapper[5058]: I1014 09:47:55.669447 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b04ee2-dd54-4eb5-ba47-485bf575545c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.085554 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" event={"ID":"13b04ee2-dd54-4eb5-ba47-485bf575545c","Type":"ContainerDied","Data":"5d3a70eaba16ee039bb607284241057c048b335a32ec60c3dbfcb13e2d602c5d"} Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.085612 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3a70eaba16ee039bb607284241057c048b335a32ec60c3dbfcb13e2d602c5d" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.086150 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bd2pj" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.200458 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4srrf"] Oct 14 09:47:56 crc kubenswrapper[5058]: E1014 09:47:56.201675 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b04ee2-dd54-4eb5-ba47-485bf575545c" containerName="neutron-sriov-openstack-openstack-cell1" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.201826 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b04ee2-dd54-4eb5-ba47-485bf575545c" containerName="neutron-sriov-openstack-openstack-cell1" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.202175 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b04ee2-dd54-4eb5-ba47-485bf575545c" containerName="neutron-sriov-openstack-openstack-cell1" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.203167 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.206424 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.206422 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.206487 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.223665 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4srrf"] Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.282947 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.283026 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.283229 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.283357 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: 
\"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.283638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfd8\" (UniqueName: \"kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.385236 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.385299 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.385338 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.385368 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.385411 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfd8\" (UniqueName: \"kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.391500 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.391932 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.393016 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.396718 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.409950 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfd8\" (UniqueName: \"kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8\") pod \"neutron-dhcp-openstack-openstack-cell1-4srrf\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.554395 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:47:56 crc kubenswrapper[5058]: I1014 09:47:56.790441 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:47:56 crc kubenswrapper[5058]: E1014 09:47:56.790996 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:47:57 crc kubenswrapper[5058]: I1014 09:47:57.128331 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4srrf"] Oct 14 09:47:57 crc kubenswrapper[5058]: W1014 09:47:57.130380 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a08639_e660_434d_89c5_581137866521.slice/crio-e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1 WatchSource:0}: Error finding container e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1: Status 404 returned error can't find the container with id e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1 Oct 14 09:47:58 crc kubenswrapper[5058]: I1014 09:47:58.109470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" event={"ID":"20a08639-e660-434d-89c5-581137866521","Type":"ContainerStarted","Data":"9fa04c0bcb0ebe400b08676112980f4d3290ce802d6784c2fb29bb09baa7f541"} Oct 14 09:47:58 crc kubenswrapper[5058]: I1014 09:47:58.110463 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" 
event={"ID":"20a08639-e660-434d-89c5-581137866521","Type":"ContainerStarted","Data":"e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1"} Oct 14 09:47:58 crc kubenswrapper[5058]: I1014 09:47:58.139171 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" podStartSLOduration=1.627917673 podStartE2EDuration="2.139148103s" podCreationTimestamp="2025-10-14 09:47:56 +0000 UTC" firstStartedPulling="2025-10-14 09:47:57.132871893 +0000 UTC m=+10825.043955699" lastFinishedPulling="2025-10-14 09:47:57.644102283 +0000 UTC m=+10825.555186129" observedRunningTime="2025-10-14 09:47:58.125236357 +0000 UTC m=+10826.036320193" watchObservedRunningTime="2025-10-14 09:47:58.139148103 +0000 UTC m=+10826.050231919" Oct 14 09:48:11 crc kubenswrapper[5058]: I1014 09:48:11.790156 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:48:11 crc kubenswrapper[5058]: E1014 09:48:11.791585 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:48:22 crc kubenswrapper[5058]: I1014 09:48:22.814511 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:48:22 crc kubenswrapper[5058]: E1014 09:48:22.815897 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:48:36 crc kubenswrapper[5058]: I1014 09:48:36.790958 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:48:36 crc kubenswrapper[5058]: E1014 09:48:36.792048 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:48:37 crc kubenswrapper[5058]: I1014 09:48:37.636750 5058 generic.go:334] "Generic (PLEG): container finished" podID="b564386b-d09e-432c-aff9-00e4c40f8aac" containerID="6c94d491b17c59ebfa23558178d61980e4e83e6dfe4b35f687625ab0a84b40a4" exitCode=0 Oct 14 09:48:37 crc kubenswrapper[5058]: I1014 09:48:37.636833 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" event={"ID":"b564386b-d09e-432c-aff9-00e4c40f8aac","Type":"ContainerDied","Data":"6c94d491b17c59ebfa23558178d61980e4e83e6dfe4b35f687625ab0a84b40a4"} Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.379496 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.447220 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory\") pod \"b564386b-d09e-432c-aff9-00e4c40f8aac\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.447320 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key\") pod \"b564386b-d09e-432c-aff9-00e4c40f8aac\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.447448 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle\") pod \"b564386b-d09e-432c-aff9-00e4c40f8aac\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.447487 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0\") pod \"b564386b-d09e-432c-aff9-00e4c40f8aac\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.447536 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w22fk\" (UniqueName: \"kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk\") pod \"b564386b-d09e-432c-aff9-00e4c40f8aac\" (UID: \"b564386b-d09e-432c-aff9-00e4c40f8aac\") " Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.452253 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "b564386b-d09e-432c-aff9-00e4c40f8aac" (UID: "b564386b-d09e-432c-aff9-00e4c40f8aac"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.452893 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk" (OuterVolumeSpecName: "kube-api-access-w22fk") pod "b564386b-d09e-432c-aff9-00e4c40f8aac" (UID: "b564386b-d09e-432c-aff9-00e4c40f8aac"). InnerVolumeSpecName "kube-api-access-w22fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.475071 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory" (OuterVolumeSpecName: "inventory") pod "b564386b-d09e-432c-aff9-00e4c40f8aac" (UID: "b564386b-d09e-432c-aff9-00e4c40f8aac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.476406 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b564386b-d09e-432c-aff9-00e4c40f8aac" (UID: "b564386b-d09e-432c-aff9-00e4c40f8aac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.482951 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "b564386b-d09e-432c-aff9-00e4c40f8aac" (UID: "b564386b-d09e-432c-aff9-00e4c40f8aac"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.550169 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w22fk\" (UniqueName: \"kubernetes.io/projected/b564386b-d09e-432c-aff9-00e4c40f8aac-kube-api-access-w22fk\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.550393 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.550404 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.550416 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.550426 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b564386b-d09e-432c-aff9-00e4c40f8aac-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.698025 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" event={"ID":"b564386b-d09e-432c-aff9-00e4c40f8aac","Type":"ContainerDied","Data":"7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f"} Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.698064 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6a46b23dc8c33db90e4d7d216671eed1f8e2d8f11841d917c5540239f1b77f" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.698076 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell2-dz9nm" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.793932 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell2-bff7d"] Oct 14 09:48:39 crc kubenswrapper[5058]: E1014 09:48:39.794428 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b564386b-d09e-432c-aff9-00e4c40f8aac" containerName="neutron-sriov-openstack-openstack-cell2" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.795565 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b564386b-d09e-432c-aff9-00e4c40f8aac" containerName="neutron-sriov-openstack-openstack-cell2" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.795862 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b564386b-d09e-432c-aff9-00e4c40f8aac" containerName="neutron-sriov-openstack-openstack-cell2" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.796729 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.801573 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.802424 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.818786 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell2-bff7d"] Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.857144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.857236 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.857294 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfndv\" (UniqueName: \"kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.857355 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 
09:48:39.857993 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.960138 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.960268 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.960312 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.960342 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfndv\" (UniqueName: \"kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.960398 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.965277 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.965310 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.967445 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.968665 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:39 crc kubenswrapper[5058]: I1014 09:48:39.981095 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfndv\" (UniqueName: \"kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv\") pod \"neutron-dhcp-openstack-openstack-cell2-bff7d\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:40 crc kubenswrapper[5058]: I1014 09:48:40.121488 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:48:40 crc kubenswrapper[5058]: I1014 09:48:40.826315 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell2-bff7d"] Oct 14 09:48:41 crc kubenswrapper[5058]: I1014 09:48:41.725154 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" event={"ID":"17ff4734-b12e-40cd-8e59-38d648ea5e9e","Type":"ContainerStarted","Data":"1687d9cfd142c0de4ece7a9401215cee9c39fae593883da62c9dd7bdfd0eec2e"} Oct 14 09:48:42 crc kubenswrapper[5058]: I1014 09:48:42.739546 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" event={"ID":"17ff4734-b12e-40cd-8e59-38d648ea5e9e","Type":"ContainerStarted","Data":"681f6a37b05943c35cad437342b2dbb46c56513ab6f28aba9db25df6931045cf"} Oct 14 09:48:42 crc kubenswrapper[5058]: I1014 09:48:42.771543 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" podStartSLOduration=3.265274631 podStartE2EDuration="3.771515349s" podCreationTimestamp="2025-10-14 09:48:39 +0000 UTC" firstStartedPulling="2025-10-14 09:48:40.8158743 +0000 UTC m=+10868.726958106" lastFinishedPulling="2025-10-14 09:48:41.322115018 +0000 UTC m=+10869.233198824" observedRunningTime="2025-10-14 09:48:42.760961009 +0000 UTC m=+10870.672044825" watchObservedRunningTime="2025-10-14 09:48:42.771515349 +0000 UTC m=+10870.682599195" Oct 14 09:48:48 crc kubenswrapper[5058]: I1014 09:48:48.791353 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:48:48 crc kubenswrapper[5058]: E1014 09:48:48.792170 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.763172 5058 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.766058 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.803175 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcd9\" (UniqueName: \"kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.803232 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.803575 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.806185 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.905893 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcd9\" (UniqueName: \"kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.906014 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.906158 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.907419 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.907700 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities\") pod 
\"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:50 crc kubenswrapper[5058]: I1014 09:48:50.924758 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcd9\" (UniqueName: \"kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9\") pod \"certified-operators-c75rw\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:51 crc kubenswrapper[5058]: I1014 09:48:51.129441 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:48:51 crc kubenswrapper[5058]: W1014 09:48:51.689371 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c5743dd_7492_4a66_9f46_2b46cec8a239.slice/crio-b53ae6a45f6a4bdf73dba76de30320f83846962bd85a9ccdefa068a7b9e659d8 WatchSource:0}: Error finding container b53ae6a45f6a4bdf73dba76de30320f83846962bd85a9ccdefa068a7b9e659d8: Status 404 returned error can't find the container with id b53ae6a45f6a4bdf73dba76de30320f83846962bd85a9ccdefa068a7b9e659d8 Oct 14 09:48:51 crc kubenswrapper[5058]: I1014 09:48:51.695021 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:48:51 crc kubenswrapper[5058]: I1014 09:48:51.856422 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerStarted","Data":"b53ae6a45f6a4bdf73dba76de30320f83846962bd85a9ccdefa068a7b9e659d8"} Oct 14 09:48:52 crc kubenswrapper[5058]: I1014 09:48:52.871870 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerID="395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73" exitCode=0 Oct 14 09:48:52 crc kubenswrapper[5058]: I1014 09:48:52.871941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerDied","Data":"395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73"} Oct 14 09:48:54 crc kubenswrapper[5058]: I1014 09:48:54.905521 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerDied","Data":"56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461"} Oct 14 09:48:54 crc kubenswrapper[5058]: I1014 09:48:54.905215 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerID="56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461" exitCode=0 Oct 14 09:48:55 crc kubenswrapper[5058]: I1014 09:48:55.919887 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerStarted","Data":"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062"} Oct 14 09:48:55 crc kubenswrapper[5058]: I1014 09:48:55.940151 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c75rw" podStartSLOduration=3.422559933 podStartE2EDuration="5.940132975s" podCreationTimestamp="2025-10-14 09:48:50 +0000 UTC" 
firstStartedPulling="2025-10-14 09:48:52.87468548 +0000 UTC m=+10880.785769316" lastFinishedPulling="2025-10-14 09:48:55.392258512 +0000 UTC m=+10883.303342358" observedRunningTime="2025-10-14 09:48:55.939878197 +0000 UTC m=+10883.850962033" watchObservedRunningTime="2025-10-14 09:48:55.940132975 +0000 UTC m=+10883.851216791" Oct 14 09:48:56 crc kubenswrapper[5058]: I1014 09:48:56.933583 5058 generic.go:334] "Generic (PLEG): container finished" podID="20a08639-e660-434d-89c5-581137866521" containerID="9fa04c0bcb0ebe400b08676112980f4d3290ce802d6784c2fb29bb09baa7f541" exitCode=0 Oct 14 09:48:56 crc kubenswrapper[5058]: I1014 09:48:56.933679 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" event={"ID":"20a08639-e660-434d-89c5-581137866521","Type":"ContainerDied","Data":"9fa04c0bcb0ebe400b08676112980f4d3290ce802d6784c2fb29bb09baa7f541"} Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.099459 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.198236 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0\") pod \"20a08639-e660-434d-89c5-581137866521\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.198679 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfd8\" (UniqueName: \"kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8\") pod \"20a08639-e660-434d-89c5-581137866521\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.198829 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle\") pod \"20a08639-e660-434d-89c5-581137866521\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.198873 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key\") pod \"20a08639-e660-434d-89c5-581137866521\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.198917 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory\") pod \"20a08639-e660-434d-89c5-581137866521\" (UID: \"20a08639-e660-434d-89c5-581137866521\") " Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.204005 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8" (OuterVolumeSpecName: "kube-api-access-hpfd8") pod "20a08639-e660-434d-89c5-581137866521" (UID: "20a08639-e660-434d-89c5-581137866521"). InnerVolumeSpecName "kube-api-access-hpfd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.218920 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "20a08639-e660-434d-89c5-581137866521" (UID: "20a08639-e660-434d-89c5-581137866521"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.227486 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory" (OuterVolumeSpecName: "inventory") pod "20a08639-e660-434d-89c5-581137866521" (UID: "20a08639-e660-434d-89c5-581137866521"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.230180 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "20a08639-e660-434d-89c5-581137866521" (UID: "20a08639-e660-434d-89c5-581137866521"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.231773 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "20a08639-e660-434d-89c5-581137866521" (UID: "20a08639-e660-434d-89c5-581137866521"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.301114 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.301147 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfd8\" (UniqueName: \"kubernetes.io/projected/20a08639-e660-434d-89c5-581137866521-kube-api-access-hpfd8\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.301158 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.301167 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.301177 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20a08639-e660-434d-89c5-581137866521-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.971510 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" event={"ID":"20a08639-e660-434d-89c5-581137866521","Type":"ContainerDied","Data":"e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1"} Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.971917 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a13d5ad51f4e76a46666fc4b50735bc238167251c6a1c2218fd4538a748bc1" Oct 14 09:48:59 crc kubenswrapper[5058]: I1014 09:48:59.971682 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4srrf" Oct 14 09:49:00 crc kubenswrapper[5058]: E1014 09:49:00.122383 5058 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a08639_e660_434d_89c5_581137866521.slice\": RecentStats: unable to find data in memory cache]" Oct 14 09:49:00 crc kubenswrapper[5058]: I1014 09:49:00.790599 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:49:00 crc kubenswrapper[5058]: E1014 09:49:00.791837 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:49:01 crc kubenswrapper[5058]: I1014 09:49:01.130316 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:01 crc kubenswrapper[5058]: I1014 09:49:01.130384 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:01 crc kubenswrapper[5058]: I1014 09:49:01.201054 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:02 crc kubenswrapper[5058]: I1014 09:49:02.070741 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:02 crc kubenswrapper[5058]: I1014 09:49:02.148136 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.031960 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c75rw" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="registry-server" containerID="cri-o://8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062" gracePeriod=2 Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.721918 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.866035 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content\") pod \"3c5743dd-7492-4a66-9f46-2b46cec8a239\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.866536 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities\") pod \"3c5743dd-7492-4a66-9f46-2b46cec8a239\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.866602 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zcd9\" (UniqueName: \"kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9\") pod \"3c5743dd-7492-4a66-9f46-2b46cec8a239\" (UID: \"3c5743dd-7492-4a66-9f46-2b46cec8a239\") " Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.868268 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities" (OuterVolumeSpecName: "utilities") pod "3c5743dd-7492-4a66-9f46-2b46cec8a239" (UID: "3c5743dd-7492-4a66-9f46-2b46cec8a239"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.884347 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9" (OuterVolumeSpecName: "kube-api-access-9zcd9") pod "3c5743dd-7492-4a66-9f46-2b46cec8a239" (UID: "3c5743dd-7492-4a66-9f46-2b46cec8a239"). InnerVolumeSpecName "kube-api-access-9zcd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.943901 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c5743dd-7492-4a66-9f46-2b46cec8a239" (UID: "3c5743dd-7492-4a66-9f46-2b46cec8a239"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.970185 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.970243 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5743dd-7492-4a66-9f46-2b46cec8a239-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:04 crc kubenswrapper[5058]: I1014 09:49:04.970267 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zcd9\" (UniqueName: \"kubernetes.io/projected/3c5743dd-7492-4a66-9f46-2b46cec8a239-kube-api-access-9zcd9\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.048551 5058 generic.go:334] "Generic (PLEG): container finished" podID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerID="8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062" exitCode=0 Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.048623 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerDied","Data":"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062"} Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.048671 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75rw" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.048693 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75rw" event={"ID":"3c5743dd-7492-4a66-9f46-2b46cec8a239","Type":"ContainerDied","Data":"b53ae6a45f6a4bdf73dba76de30320f83846962bd85a9ccdefa068a7b9e659d8"} Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.048721 5058 scope.go:117] "RemoveContainer" containerID="8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.097074 5058 scope.go:117] "RemoveContainer" containerID="56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.101913 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.118168 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c75rw"] Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.129655 5058 scope.go:117] "RemoveContainer" containerID="395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.169395 5058 scope.go:117] "RemoveContainer" containerID="8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062" Oct 14 09:49:05 crc kubenswrapper[5058]: E1014 09:49:05.169951 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062\": container with ID starting with 8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062 not found: ID does not exist" containerID="8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062" Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.169993 
Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.169993 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062"} err="failed to get container status \"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062\": rpc error: code = NotFound desc = could not find container \"8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062\": container with ID starting with 8850594d21eb0e01dfcd6762d9d94840cc35132cbb1066a10b35a9bc44fb0062 not found: ID does not exist"
Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.170024 5058 scope.go:117] "RemoveContainer" containerID="56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461"
Oct 14 09:49:05 crc kubenswrapper[5058]: E1014 09:49:05.170433 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461\": container with ID starting with 56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461 not found: ID does not exist" containerID="56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461"
Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.170521 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461"} err="failed to get container status \"56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461\": rpc error: code = NotFound desc = could not find container \"56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461\": container with ID starting with 56cbc85001e19a2c7f3d9cd375f8d6189d11801919e520ac6d2fd6f225b96461 not found: ID does not exist"
Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.170584 5058 scope.go:117] "RemoveContainer" containerID="395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73"
Oct 14 09:49:05 crc kubenswrapper[5058]: E1014 09:49:05.170913 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73\": container with ID starting with 395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73 not found: ID does not exist" containerID="395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73"
Oct 14 09:49:05 crc kubenswrapper[5058]: I1014 09:49:05.170936 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73"} err="failed to get container status \"395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73\": rpc error: code = NotFound desc = could not find container \"395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73\": container with ID starting with 395558e92e41929994052e3d841f219a83226c67294ae36bd74fa9ebd43c6a73 not found: ID does not exist"
Oct 14 09:49:06 crc kubenswrapper[5058]: I1014 09:49:06.810609 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" path="/var/lib/kubelet/pods/3c5743dd-7492-4a66-9f46-2b46cec8a239/volumes"
Oct 14 09:49:12 crc kubenswrapper[5058]: I1014 09:49:12.805531 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:49:27 crc kubenswrapper[5058]: I1014 09:49:27.790265 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:49:27 crc kubenswrapper[5058]: E1014 09:49:27.791072 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:49:39 crc kubenswrapper[5058]: I1014 09:49:39.790698 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:49:39 crc kubenswrapper[5058]: E1014 09:49:39.793943 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.269615 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q75nl"] Oct 14 09:49:45 crc kubenswrapper[5058]: E1014 09:49:45.270935 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="extract-content" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.270961 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="extract-content" Oct 14 09:49:45 crc kubenswrapper[5058]: E1014 09:49:45.270991 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="registry-server" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.271004 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="registry-server" Oct 14 09:49:45 crc kubenswrapper[5058]: E1014 09:49:45.271020 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a08639-e660-434d-89c5-581137866521" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.271033 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a08639-e660-434d-89c5-581137866521" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 14 09:49:45 crc kubenswrapper[5058]: E1014 09:49:45.271062 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="extract-utilities" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.271073 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="extract-utilities" Oct 14 09:49:45 crc 
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.271528 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a08639-e660-434d-89c5-581137866521" containerName="neutron-dhcp-openstack-openstack-cell1"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.271579 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5743dd-7492-4a66-9f46-2b46cec8a239" containerName="registry-server"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.274496 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.294958 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q75nl"]
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.352730 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-catalog-content\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.352807 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.352930 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfm7h\" (UniqueName: \"kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.454298 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-catalog-content\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.454590 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.454681 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfm7h\" (UniqueName: \"kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl"
pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.455483 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.474680 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfm7h\" (UniqueName: \"kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h\") pod \"community-operators-q75nl\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:45 crc kubenswrapper[5058]: I1014 09:49:45.650056 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:46 crc kubenswrapper[5058]: I1014 09:49:46.107817 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q75nl"] Oct 14 09:49:46 crc kubenswrapper[5058]: I1014 09:49:46.633245 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerID="9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30" exitCode=0 Oct 14 09:49:46 crc kubenswrapper[5058]: I1014 09:49:46.633340 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerDied","Data":"9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30"} Oct 14 09:49:46 crc kubenswrapper[5058]: I1014 09:49:46.633626 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerStarted","Data":"bc0ee0fa086bc19d541d4d6e6bef22c51e6b1822cdeee8b4cdb616803629cd65"} Oct 14 09:49:46 crc kubenswrapper[5058]: I1014 09:49:46.637527 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:49:47 crc kubenswrapper[5058]: I1014 09:49:47.649982 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerStarted","Data":"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b"} Oct 14 09:49:48 crc kubenswrapper[5058]: I1014 09:49:48.669107 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerID="775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b" exitCode=0 Oct 14 09:49:48 crc kubenswrapper[5058]: I1014 09:49:48.669183 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerDied","Data":"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b"} Oct 14 09:49:49 crc kubenswrapper[5058]: I1014 09:49:49.680931 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerStarted","Data":"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910"} Oct 14 09:49:49 crc kubenswrapper[5058]: I1014 09:49:49.717144 
Oct 14 09:49:49 crc kubenswrapper[5058]: I1014 09:49:49.717144 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q75nl" podStartSLOduration=2.220428923 podStartE2EDuration="4.717123046s" podCreationTimestamp="2025-10-14 09:49:45 +0000 UTC" firstStartedPulling="2025-10-14 09:49:46.637300623 +0000 UTC m=+10934.548384429" lastFinishedPulling="2025-10-14 09:49:49.133994736 +0000 UTC m=+10937.045078552" observedRunningTime="2025-10-14 09:49:49.700788271 +0000 UTC m=+10937.611872087" watchObservedRunningTime="2025-10-14 09:49:49.717123046 +0000 UTC m=+10937.628206872"
Oct 14 09:49:52 crc kubenswrapper[5058]: I1014 09:49:52.806815 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f"
Oct 14 09:49:52 crc kubenswrapper[5058]: E1014 09:49:52.807220 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:49:55 crc kubenswrapper[5058]: I1014 09:49:55.650411 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:55 crc kubenswrapper[5058]: I1014 09:49:55.650936 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:55 crc kubenswrapper[5058]: I1014 09:49:55.751668 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:55 crc kubenswrapper[5058]: I1014 09:49:55.821584 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q75nl"
Oct 14 09:49:56 crc kubenswrapper[5058]: I1014 09:49:56.003711 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q75nl"]
Oct 14 09:49:57 crc kubenswrapper[5058]: I1014 09:49:57.770064 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q75nl" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="registry-server" containerID="cri-o://21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.474684 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfm7h\" (UniqueName: \"kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h\") pod \"ba920194-11df-4b28-b423-64fac1a9d1e1\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.475068 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-catalog-content\") pod \"ba920194-11df-4b28-b423-64fac1a9d1e1\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.475220 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities\") pod \"ba920194-11df-4b28-b423-64fac1a9d1e1\" (UID: \"ba920194-11df-4b28-b423-64fac1a9d1e1\") " Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.476619 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities" (OuterVolumeSpecName: "utilities") pod "ba920194-11df-4b28-b423-64fac1a9d1e1" (UID: "ba920194-11df-4b28-b423-64fac1a9d1e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.483230 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h" (OuterVolumeSpecName: "kube-api-access-gfm7h") pod "ba920194-11df-4b28-b423-64fac1a9d1e1" (UID: "ba920194-11df-4b28-b423-64fac1a9d1e1"). InnerVolumeSpecName "kube-api-access-gfm7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.540506 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba920194-11df-4b28-b423-64fac1a9d1e1" (UID: "ba920194-11df-4b28-b423-64fac1a9d1e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.577445 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfm7h\" (UniqueName: \"kubernetes.io/projected/ba920194-11df-4b28-b423-64fac1a9d1e1-kube-api-access-gfm7h\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.577501 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.577520 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba920194-11df-4b28-b423-64fac1a9d1e1-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.783168 5058 generic.go:334] "Generic (PLEG): container finished" podID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerID="21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910" exitCode=0 Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.783250 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerDied","Data":"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910"} Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.783475 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q75nl" event={"ID":"ba920194-11df-4b28-b423-64fac1a9d1e1","Type":"ContainerDied","Data":"bc0ee0fa086bc19d541d4d6e6bef22c51e6b1822cdeee8b4cdb616803629cd65"} Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.783502 5058 scope.go:117] "RemoveContainer" containerID="21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.783367 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q75nl" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.812713 5058 scope.go:117] "RemoveContainer" containerID="775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.849060 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q75nl"] Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.856112 5058 scope.go:117] "RemoveContainer" containerID="9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.858626 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q75nl"] Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.888750 5058 scope.go:117] "RemoveContainer" containerID="21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910" Oct 14 09:49:58 crc kubenswrapper[5058]: E1014 09:49:58.889427 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910\": container with ID starting with 21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910 not found: ID does not exist" containerID="21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.889478 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910"} err="failed to get container status \"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910\": rpc error: code = NotFound desc = could not find container \"21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910\": container with ID starting with 21fa111d6fb29a1d93c4eaf55837ac949249b65419337bcbf72c08a5f236a910 not found: ID does not exist" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.889511 5058 scope.go:117] "RemoveContainer" containerID="775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b" Oct 14 09:49:58 crc kubenswrapper[5058]: E1014 09:49:58.889986 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b\": container with ID starting with 775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b not found: ID does not exist" containerID="775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.890111 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b"} err="failed to get container status \"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b\": rpc error: code = NotFound desc = could not find container \"775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b\": container with ID starting with 775f4bf39f1f3340f494bd9453ecd636f62f01c346afdf5539c7c15ad6013c9b not found: ID does not exist" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.890203 5058 scope.go:117] "RemoveContainer" containerID="9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30" Oct 14 09:49:58 crc kubenswrapper[5058]: E1014 09:49:58.890878 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30\": container with ID starting with 9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30 not found: ID does not exist" containerID="9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30" Oct 14 09:49:58 crc kubenswrapper[5058]: I1014 09:49:58.890942 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30"} err="failed to get container status \"9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30\": rpc error: code = NotFound desc = could not find container \"9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30\": container with ID starting with 9065e49610b046519c38c33b7aff77712198c1cfbac4f3fab2b1e5c2cc802a30 not found: ID does not exist" Oct 14 09:50:00 crc kubenswrapper[5058]: I1014 09:50:00.801736 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" path="/var/lib/kubelet/pods/ba920194-11df-4b28-b423-64fac1a9d1e1/volumes" Oct 14 09:50:06 crc kubenswrapper[5058]: I1014 09:50:06.790234 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:50:06 crc kubenswrapper[5058]: E1014 09:50:06.791050 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:50:20 crc kubenswrapper[5058]: I1014 09:50:20.792175 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:50:20 crc kubenswrapper[5058]: E1014 09:50:20.793554 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:50:33 crc kubenswrapper[5058]: I1014 09:50:33.790627 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:50:35 crc kubenswrapper[5058]: I1014 09:50:35.283592 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7"} Oct 14 09:51:13 crc kubenswrapper[5058]: I1014 09:51:13.799831 5058 generic.go:334] "Generic (PLEG): container finished" podID="17ff4734-b12e-40cd-8e59-38d648ea5e9e" containerID="681f6a37b05943c35cad437342b2dbb46c56513ab6f28aba9db25df6931045cf" exitCode=0 Oct 14 09:51:13 crc kubenswrapper[5058]: I1014 09:51:13.799941 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" 
event={"ID":"17ff4734-b12e-40cd-8e59-38d648ea5e9e","Type":"ContainerDied","Data":"681f6a37b05943c35cad437342b2dbb46c56513ab6f28aba9db25df6931045cf"} Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.372377 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.415441 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfndv\" (UniqueName: \"kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv\") pod \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.415702 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle\") pod \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.416102 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key\") pod \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.416177 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0\") pod \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.416328 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory\") pod \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\" (UID: \"17ff4734-b12e-40cd-8e59-38d648ea5e9e\") " Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.425014 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "17ff4734-b12e-40cd-8e59-38d648ea5e9e" (UID: "17ff4734-b12e-40cd-8e59-38d648ea5e9e"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.425539 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv" (OuterVolumeSpecName: "kube-api-access-bfndv") pod "17ff4734-b12e-40cd-8e59-38d648ea5e9e" (UID: "17ff4734-b12e-40cd-8e59-38d648ea5e9e"). InnerVolumeSpecName "kube-api-access-bfndv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.457320 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory" (OuterVolumeSpecName: "inventory") pod "17ff4734-b12e-40cd-8e59-38d648ea5e9e" (UID: "17ff4734-b12e-40cd-8e59-38d648ea5e9e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.458506 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17ff4734-b12e-40cd-8e59-38d648ea5e9e" (UID: "17ff4734-b12e-40cd-8e59-38d648ea5e9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.473019 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "17ff4734-b12e-40cd-8e59-38d648ea5e9e" (UID: "17ff4734-b12e-40cd-8e59-38d648ea5e9e"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.521195 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.521245 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.521274 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.521302 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfndv\" (UniqueName: \"kubernetes.io/projected/17ff4734-b12e-40cd-8e59-38d648ea5e9e-kube-api-access-bfndv\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.521338 5058 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ff4734-b12e-40cd-8e59-38d648ea5e9e-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.836287 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" event={"ID":"17ff4734-b12e-40cd-8e59-38d648ea5e9e","Type":"ContainerDied","Data":"1687d9cfd142c0de4ece7a9401215cee9c39fae593883da62c9dd7bdfd0eec2e"} Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.836642 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1687d9cfd142c0de4ece7a9401215cee9c39fae593883da62c9dd7bdfd0eec2e" Oct 14 09:51:15 crc kubenswrapper[5058]: I1014 09:51:15.836734 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell2-bff7d" Oct 14 09:51:36 crc kubenswrapper[5058]: I1014 09:51:36.629436 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:36 crc kubenswrapper[5058]: I1014 09:51:36.630059 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" containerName="nova-cell0-conductor-conductor" containerID="cri-o://04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55" gracePeriod=30 Oct 14 09:51:36 crc kubenswrapper[5058]: I1014 09:51:36.647158 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:36 crc kubenswrapper[5058]: I1014 09:51:36.647367 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" gracePeriod=30 Oct 14 09:51:37 crc kubenswrapper[5058]: E1014 09:51:37.245625 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 is running failed: container process not found" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 09:51:37 crc kubenswrapper[5058]: E1014 09:51:37.246450 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 is running failed: container process not found" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 09:51:37 crc kubenswrapper[5058]: E1014 09:51:37.246951 5058 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 is running failed: container process not found" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 14 09:51:37 crc kubenswrapper[5058]: E1014 09:51:37.247043 5058 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerName="nova-cell1-conductor-conductor" Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.448651 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.449132 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell2-conductor-0" podUID="3a70199b-22be-47f0-b8b1-75b894258c1a" containerName="nova-cell2-conductor-conductor" containerID="cri-o://38bd7e9f5f204dca7ed11cec45a145977118f5eb5309800e690b1916e8bdf20b" gracePeriod=30 Oct 14 09:51:37 crc 
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.619858 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.674252 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-combined-ca-bundle\") pod \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") "
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.674414 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm249\" (UniqueName: \"kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249\") pod \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") "
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.674469 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data\") pod \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\" (UID: \"68be61db-e0a0-4286-80c3-b0ffeb8498d4\") "
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.691341 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249" (OuterVolumeSpecName: "kube-api-access-zm249") pod "68be61db-e0a0-4286-80c3-b0ffeb8498d4" (UID: "68be61db-e0a0-4286-80c3-b0ffeb8498d4"). InnerVolumeSpecName "kube-api-access-zm249". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.725080 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data" (OuterVolumeSpecName: "config-data") pod "68be61db-e0a0-4286-80c3-b0ffeb8498d4" (UID: "68be61db-e0a0-4286-80c3-b0ffeb8498d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.778150 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm249\" (UniqueName: \"kubernetes.io/projected/68be61db-e0a0-4286-80c3-b0ffeb8498d4-kube-api-access-zm249\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.778183 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:37 crc kubenswrapper[5058]: I1014 09:51:37.778194 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68be61db-e0a0-4286-80c3-b0ffeb8498d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.156046 5058 generic.go:334] "Generic (PLEG): container finished" podID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" exitCode=0 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.156087 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"68be61db-e0a0-4286-80c3-b0ffeb8498d4","Type":"ContainerDied","Data":"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117"} Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.156111 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"68be61db-e0a0-4286-80c3-b0ffeb8498d4","Type":"ContainerDied","Data":"71ffd59e1279f5ef9e08ecff962cb6e4f7684ba85269d6f3ced8ef0b47d66780"} Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.156127 5058 scope.go:117] "RemoveContainer" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.156230 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.231971 5058 scope.go:117] "RemoveContainer" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.232062 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.233545 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117\": container with ID starting with 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 not found: ID does not exist" containerID="5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.233569 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117"} err="failed to get container status \"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117\": rpc error: code = NotFound desc = could not find container \"5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117\": container with ID starting with 5c7be171553ce931f23bf63293186560bb5c543aebd10dd935fc0caf8cd8b117 not found: ID does not exist" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.266485 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.281962 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.282508 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ff4734-b12e-40cd-8e59-38d648ea5e9e" containerName="neutron-dhcp-openstack-openstack-cell2" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282529 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ff4734-b12e-40cd-8e59-38d648ea5e9e" containerName="neutron-dhcp-openstack-openstack-cell2" Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.282577 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="extract-utilities" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282586 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="extract-utilities" Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.282609 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerName="nova-cell1-conductor-conductor" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282617 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerName="nova-cell1-conductor-conductor" Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.282631 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="extract-content" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282639 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="extract-content" Oct 14 09:51:38 crc kubenswrapper[5058]: E1014 09:51:38.282665 5058 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="registry-server" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282673 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="registry-server" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282928 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" containerName="nova-cell1-conductor-conductor" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282950 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba920194-11df-4b28-b423-64fac1a9d1e1" containerName="registry-server" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.282968 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ff4734-b12e-40cd-8e59-38d648ea5e9e" containerName="neutron-dhcp-openstack-openstack-cell2" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.283894 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.286304 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.297227 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.297333 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.297412 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmljp\" (UniqueName: \"kubernetes.io/projected/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-kube-api-access-nmljp\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.309929 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.347301 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.347515 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell3-conductor-0" podUID="82d7fde3-d8a9-40c9-a83c-4003d297615e" containerName="nova-cell3-conductor-conductor" containerID="cri-o://b0260427a1bb46b0a0e2780827b516abeb2abb961f9f6a656ac31cb58c58d269" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.400986 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " 
pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.401109 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmljp\" (UniqueName: \"kubernetes.io/projected/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-kube-api-access-nmljp\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.401169 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.423718 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.424422 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.427408 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmljp\" (UniqueName: \"kubernetes.io/projected/ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8-kube-api-access-nmljp\") pod \"nova-cell1-conductor-0\" (UID: \"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8\") " pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.527357 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.527570 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" containerName="nova-scheduler-scheduler" containerID="cri-o://3624b3b60feee0d575d3fe2990941128a77701a291e7296e3f6b3b67a7283690" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.560099 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.560403 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-log" containerID="cri-o://fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.560581 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-api" containerID="cri-o://256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.577320 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.577710 5058 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" containerID="cri-o://19b2175ddcb06631577fe677e30dab23bd6bfde38844f7815ad521860f9ec564" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.577582 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" containerID="cri-o://1b7cabe79a9fb0aa7148bbfcda03eb7a9f05811c4c6ff8244605bd4bac2b1a8b" gracePeriod=30 Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.599404 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:38 crc kubenswrapper[5058]: I1014 09:51:38.808286 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68be61db-e0a0-4286-80c3-b0ffeb8498d4" path="/var/lib/kubelet/pods/68be61db-e0a0-4286-80c3-b0ffeb8498d4/volumes" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.132657 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 09:51:39 crc kubenswrapper[5058]: W1014 09:51:39.133933 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec35f4ba_510f_4665_8f2c_3ed0bb2b91d8.slice/crio-751147ca1d00a5bd273c503bd5a9e52942399cfd11743016c40772ae51b88ee7 WatchSource:0}: Error finding container 751147ca1d00a5bd273c503bd5a9e52942399cfd11743016c40772ae51b88ee7: Status 404 returned error can't find the container with id 751147ca1d00a5bd273c503bd5a9e52942399cfd11743016c40772ae51b88ee7 Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.175078 5058 generic.go:334] "Generic (PLEG): container finished" podID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerID="1b7cabe79a9fb0aa7148bbfcda03eb7a9f05811c4c6ff8244605bd4bac2b1a8b" exitCode=143 Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.175173 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerDied","Data":"1b7cabe79a9fb0aa7148bbfcda03eb7a9f05811c4c6ff8244605bd4bac2b1a8b"} Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.179126 5058 generic.go:334] "Generic (PLEG): container finished" podID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerID="fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696" exitCode=143 Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.179194 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerDied","Data":"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696"} Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.181918 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8","Type":"ContainerStarted","Data":"751147ca1d00a5bd273c503bd5a9e52942399cfd11743016c40772ae51b88ee7"} Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.624270 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.731512 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle\") pod \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.731960 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkdgd\" (UniqueName: \"kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd\") pod \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.732140 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data\") pod \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\" (UID: \"2f0dc5f2-ff52-4c09-97f7-be156e1d0847\") " Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.742011 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd" (OuterVolumeSpecName: "kube-api-access-dkdgd") pod "2f0dc5f2-ff52-4c09-97f7-be156e1d0847" (UID: "2f0dc5f2-ff52-4c09-97f7-be156e1d0847"). InnerVolumeSpecName "kube-api-access-dkdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.777184 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f0dc5f2-ff52-4c09-97f7-be156e1d0847" (UID: "2f0dc5f2-ff52-4c09-97f7-be156e1d0847"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.787677 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data" (OuterVolumeSpecName: "config-data") pod "2f0dc5f2-ff52-4c09-97f7-be156e1d0847" (UID: "2f0dc5f2-ff52-4c09-97f7-be156e1d0847"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.835131 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.836248 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkdgd\" (UniqueName: \"kubernetes.io/projected/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-kube-api-access-dkdgd\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:39 crc kubenswrapper[5058]: I1014 09:51:39.836338 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0dc5f2-ff52-4c09-97f7-be156e1d0847-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.208056 5058 generic.go:334] "Generic (PLEG): container finished" podID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" containerID="3624b3b60feee0d575d3fe2990941128a77701a291e7296e3f6b3b67a7283690" exitCode=0 Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.208117 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b60f1acf-e39d-4169-ab16-0c77fa2d92ef","Type":"ContainerDied","Data":"3624b3b60feee0d575d3fe2990941128a77701a291e7296e3f6b3b67a7283690"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.211165 5058 generic.go:334] "Generic (PLEG): container finished" podID="3a70199b-22be-47f0-b8b1-75b894258c1a" containerID="38bd7e9f5f204dca7ed11cec45a145977118f5eb5309800e690b1916e8bdf20b" exitCode=0 Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.211216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"3a70199b-22be-47f0-b8b1-75b894258c1a","Type":"ContainerDied","Data":"38bd7e9f5f204dca7ed11cec45a145977118f5eb5309800e690b1916e8bdf20b"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.211236 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"3a70199b-22be-47f0-b8b1-75b894258c1a","Type":"ContainerDied","Data":"7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.211246 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eebf5b7521cccfd8bcb773dbfa7041487c10ee7203fed79c6b35102c8cf2bf8" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.212333 5058 generic.go:334] "Generic (PLEG): container finished" podID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" containerID="04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55" exitCode=0 Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.212371 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f0dc5f2-ff52-4c09-97f7-be156e1d0847","Type":"ContainerDied","Data":"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.212393 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f0dc5f2-ff52-4c09-97f7-be156e1d0847","Type":"ContainerDied","Data":"90911c806300bc035fef057ce93b59d5b38c6cbc86435a3eae5be886133b264d"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.212409 5058 scope.go:117] "RemoveContainer" 
containerID="04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.212521 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.217444 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8","Type":"ContainerStarted","Data":"e47892356cea03ace46dad2df473e348b9c7dde09a8bb96e2b5cf1b93f41513a"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.218162 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.221456 5058 generic.go:334] "Generic (PLEG): container finished" podID="82d7fde3-d8a9-40c9-a83c-4003d297615e" containerID="b0260427a1bb46b0a0e2780827b516abeb2abb961f9f6a656ac31cb58c58d269" exitCode=0 Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.221484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"82d7fde3-d8a9-40c9-a83c-4003d297615e","Type":"ContainerDied","Data":"b0260427a1bb46b0a0e2780827b516abeb2abb961f9f6a656ac31cb58c58d269"} Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.259961 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.259946075 podStartE2EDuration="2.259946075s" podCreationTimestamp="2025-10-14 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:40.255157709 +0000 UTC m=+11048.166241515" watchObservedRunningTime="2025-10-14 09:51:40.259946075 +0000 UTC m=+11048.171029881" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.284137 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.308927 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.311909 5058 scope.go:117] "RemoveContainer" containerID="04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55" Oct 14 09:51:40 crc kubenswrapper[5058]: E1014 09:51:40.313674 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55\": container with ID starting with 04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55 not found: ID does not exist" containerID="04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.313711 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55"} err="failed to get container status \"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55\": rpc error: code = NotFound desc = could not find container \"04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55\": container with ID starting with 04b4185bdcb893f7526fa1b1c399b94275841d9f9007765aad3fc1c2c2d02c55 not found: ID does not exist" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.339133 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.346157 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv5lj\" (UniqueName: \"kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj\") pod \"3a70199b-22be-47f0-b8b1-75b894258c1a\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.346318 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle\") pod \"3a70199b-22be-47f0-b8b1-75b894258c1a\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.346758 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data\") pod \"3a70199b-22be-47f0-b8b1-75b894258c1a\" (UID: \"3a70199b-22be-47f0-b8b1-75b894258c1a\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.351038 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj" (OuterVolumeSpecName: "kube-api-access-rv5lj") pod "3a70199b-22be-47f0-b8b1-75b894258c1a" (UID: "3a70199b-22be-47f0-b8b1-75b894258c1a"). InnerVolumeSpecName "kube-api-access-rv5lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.355611 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:40 crc kubenswrapper[5058]: E1014 09:51:40.356118 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a70199b-22be-47f0-b8b1-75b894258c1a" containerName="nova-cell2-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.356137 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a70199b-22be-47f0-b8b1-75b894258c1a" containerName="nova-cell2-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: E1014 09:51:40.356160 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" containerName="nova-cell0-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.356167 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" containerName="nova-cell0-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.356351 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" containerName="nova-cell0-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.356368 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a70199b-22be-47f0-b8b1-75b894258c1a" containerName="nova-cell2-conductor-conductor" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.357192 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.359138 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.380653 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a70199b-22be-47f0-b8b1-75b894258c1a" (UID: "3a70199b-22be-47f0-b8b1-75b894258c1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.392424 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.402571 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data" (OuterVolumeSpecName: "config-data") pod "3a70199b-22be-47f0-b8b1-75b894258c1a" (UID: "3a70199b-22be-47f0-b8b1-75b894258c1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.449715 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2hg\" (UniqueName: \"kubernetes.io/projected/283127b1-9cc1-47f1-80a2-551566f93544-kube-api-access-pj2hg\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.449807 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.449891 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.449986 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.450004 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv5lj\" (UniqueName: \"kubernetes.io/projected/3a70199b-22be-47f0-b8b1-75b894258c1a-kube-api-access-rv5lj\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.450013 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a70199b-22be-47f0-b8b1-75b894258c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.461087 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.478750 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551134 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle\") pod \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551211 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54wp7\" (UniqueName: \"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7\") pod \"82d7fde3-d8a9-40c9-a83c-4003d297615e\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551250 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle\") pod \"82d7fde3-d8a9-40c9-a83c-4003d297615e\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ssl\" (UniqueName: \"kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl\") pod \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551463 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data\") pod \"82d7fde3-d8a9-40c9-a83c-4003d297615e\" (UID: \"82d7fde3-d8a9-40c9-a83c-4003d297615e\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551574 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data\") pod \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\" (UID: \"b60f1acf-e39d-4169-ab16-0c77fa2d92ef\") " Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.551992 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2hg\" (UniqueName: \"kubernetes.io/projected/283127b1-9cc1-47f1-80a2-551566f93544-kube-api-access-pj2hg\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.552068 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.552178 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.554874 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7" (OuterVolumeSpecName: "kube-api-access-54wp7") pod "82d7fde3-d8a9-40c9-a83c-4003d297615e" (UID: "82d7fde3-d8a9-40c9-a83c-4003d297615e"). InnerVolumeSpecName "kube-api-access-54wp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.555734 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl" (OuterVolumeSpecName: "kube-api-access-59ssl") pod "b60f1acf-e39d-4169-ab16-0c77fa2d92ef" (UID: "b60f1acf-e39d-4169-ab16-0c77fa2d92ef"). InnerVolumeSpecName "kube-api-access-59ssl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.557061 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.564778 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283127b1-9cc1-47f1-80a2-551566f93544-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.571258 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2hg\" (UniqueName: \"kubernetes.io/projected/283127b1-9cc1-47f1-80a2-551566f93544-kube-api-access-pj2hg\") pod \"nova-cell0-conductor-0\" (UID: \"283127b1-9cc1-47f1-80a2-551566f93544\") " pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.581977 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data" (OuterVolumeSpecName: "config-data") pod "b60f1acf-e39d-4169-ab16-0c77fa2d92ef" (UID: "b60f1acf-e39d-4169-ab16-0c77fa2d92ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.585030 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b60f1acf-e39d-4169-ab16-0c77fa2d92ef" (UID: "b60f1acf-e39d-4169-ab16-0c77fa2d92ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.585542 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82d7fde3-d8a9-40c9-a83c-4003d297615e" (UID: "82d7fde3-d8a9-40c9-a83c-4003d297615e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.585895 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data" (OuterVolumeSpecName: "config-data") pod "82d7fde3-d8a9-40c9-a83c-4003d297615e" (UID: "82d7fde3-d8a9-40c9-a83c-4003d297615e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654597 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ssl\" (UniqueName: \"kubernetes.io/projected/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-kube-api-access-59ssl\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654633 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654642 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654649 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60f1acf-e39d-4169-ab16-0c77fa2d92ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654659 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54wp7\" (UniqueName: \"kubernetes.io/projected/82d7fde3-d8a9-40c9-a83c-4003d297615e-kube-api-access-54wp7\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.654667 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d7fde3-d8a9-40c9-a83c-4003d297615e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.775827 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:40 crc kubenswrapper[5058]: I1014 09:51:40.809281 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0dc5f2-ff52-4c09-97f7-be156e1d0847" path="/var/lib/kubelet/pods/2f0dc5f2-ff52-4c09-97f7-be156e1d0847/volumes" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.236919 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"82d7fde3-d8a9-40c9-a83c-4003d297615e","Type":"ContainerDied","Data":"c366e03ef632cec7cfc071c6caa67ffc06b60df4c35a33291a98817d0bdd7c4a"} Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.236957 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.236974 5058 scope.go:117] "RemoveContainer" containerID="b0260427a1bb46b0a0e2780827b516abeb2abb961f9f6a656ac31cb58c58d269" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.241578 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b60f1acf-e39d-4169-ab16-0c77fa2d92ef","Type":"ContainerDied","Data":"cd725518d948fa42762a8d46482442d274e8e384416dbdaee4d785cfbaa9afe9"} Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.241779 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.242788 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.268950 5058 scope.go:117] "RemoveContainer" containerID="3624b3b60feee0d575d3fe2990941128a77701a291e7296e3f6b3b67a7283690" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.291708 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.310058 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.323729 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.343845 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.396990 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: E1014 09:51:41.397836 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" containerName="nova-scheduler-scheduler" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.397858 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" containerName="nova-scheduler-scheduler" Oct 14 09:51:41 crc kubenswrapper[5058]: E1014 09:51:41.397907 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d7fde3-d8a9-40c9-a83c-4003d297615e" containerName="nova-cell3-conductor-conductor" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.397914 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d7fde3-d8a9-40c9-a83c-4003d297615e" containerName="nova-cell3-conductor-conductor" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.398116 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" containerName="nova-scheduler-scheduler" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.398145 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d7fde3-d8a9-40c9-a83c-4003d297615e" containerName="nova-cell3-conductor-conductor" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.398925 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.409616 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell3-conductor-config-data" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.409855 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.432696 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.445312 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.467880 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.469736 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.473035 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-conductor-config-data" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.474506 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.474560 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.474583 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbcb\" (UniqueName: \"kubernetes.io/projected/01281f7e-5368-4ab7-be69-fcfed8f7a207-kube-api-access-rfbcb\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.488292 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.501533 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.503500 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.505311 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.517882 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.531060 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575465 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-config-data\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575544 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575584 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575624 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575645 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbcb\" (UniqueName: \"kubernetes.io/projected/01281f7e-5368-4ab7-be69-fcfed8f7a207-kube-api-access-rfbcb\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575674 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbcc\" (UniqueName: \"kubernetes.io/projected/78bf9d98-9166-4c93-adfb-f327f12f6d09-kube-api-access-5qbcc\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575733 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsf6\" (UniqueName: \"kubernetes.io/projected/e07126c4-2f3b-4001-8e77-5a7555836354-kube-api-access-fjsf6\") pod 
\"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.575793 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.580395 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-combined-ca-bundle\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.580508 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01281f7e-5368-4ab7-be69-fcfed8f7a207-config-data\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.594646 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbcb\" (UniqueName: \"kubernetes.io/projected/01281f7e-5368-4ab7-be69-fcfed8f7a207-kube-api-access-rfbcb\") pod \"nova-cell3-conductor-0\" (UID: \"01281f7e-5368-4ab7-be69-fcfed8f7a207\") " pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677664 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677738 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-config-data\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677839 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677867 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677919 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbcc\" (UniqueName: \"kubernetes.io/projected/78bf9d98-9166-4c93-adfb-f327f12f6d09-kube-api-access-5qbcc\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.677940 5058 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsf6\" (UniqueName: \"kubernetes.io/projected/e07126c4-2f3b-4001-8e77-5a7555836354-kube-api-access-fjsf6\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.681459 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-config-data\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.685351 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-combined-ca-bundle\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.685888 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07126c4-2f3b-4001-8e77-5a7555836354-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.690562 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bf9d98-9166-4c93-adfb-f327f12f6d09-config-data\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.696698 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsf6\" (UniqueName: \"kubernetes.io/projected/e07126c4-2f3b-4001-8e77-5a7555836354-kube-api-access-fjsf6\") pod \"nova-scheduler-0\" (UID: \"e07126c4-2f3b-4001-8e77-5a7555836354\") " pod="openstack/nova-scheduler-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.712544 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbcc\" (UniqueName: \"kubernetes.io/projected/78bf9d98-9166-4c93-adfb-f327f12f6d09-kube-api-access-5qbcc\") pod \"nova-cell2-conductor-0\" (UID: \"78bf9d98-9166-4c93-adfb-f327f12f6d09\") " pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.779129 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.135:8775/\": read tcp 10.217.0.2:56006->10.217.1.135:8775: read: connection reset by peer" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.779228 5058 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.135:8775/\": read tcp 10.217.0.2:56020->10.217.1.135:8775: read: connection reset by peer" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.805039 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.833330 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:41 crc kubenswrapper[5058]: I1014 09:51:41.852670 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.292336 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.294766 5058 generic.go:334] "Generic (PLEG): container finished" podID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerID="19b2175ddcb06631577fe677e30dab23bd6bfde38844f7815ad521860f9ec564" exitCode=0 Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.294879 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerDied","Data":"19b2175ddcb06631577fe677e30dab23bd6bfde38844f7815ad521860f9ec564"} Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.296490 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"283127b1-9cc1-47f1-80a2-551566f93544","Type":"ContainerStarted","Data":"4cd5b9cdd3ae2f08190aebbaeaec121546af02fa97c7cbdec754852f2d66122a"} Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.296516 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"283127b1-9cc1-47f1-80a2-551566f93544","Type":"ContainerStarted","Data":"0eb7b545be3993a540a91191a83c0b1e0d440f0b4881107e668cee8874e04dd7"} Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.297341 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.325866 5058 generic.go:334] "Generic (PLEG): container finished" podID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerID="256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7" exitCode=0 Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.325909 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerDied","Data":"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7"} Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.325935 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7","Type":"ContainerDied","Data":"5cde77956f03e26f2102aa475390075a6ba6294ecbcd5825a67c020df27673d6"} Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.325950 5058 scope.go:117] "RemoveContainer" containerID="256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.326112 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.345930 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.345912702 podStartE2EDuration="2.345912702s" podCreationTimestamp="2025-10-14 09:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:42.333215371 +0000 UTC m=+11050.244299177" watchObservedRunningTime="2025-10-14 09:51:42.345912702 +0000 UTC m=+11050.256996508" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.453086 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.453835 5058 scope.go:117] "RemoveContainer" containerID="fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.476309 5058 scope.go:117] "RemoveContainer" containerID="256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7" Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.486820 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7\": container with ID starting with 256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7 not found: ID does not exist" containerID="256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.486868 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7"} err="failed to get container status \"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7\": rpc error: code = NotFound desc = could not find container \"256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7\": container with ID starting with 256afe36ac11818852705cdda9a172b2c5e55e6d1084b7dea2ff6dbcf7d8a1d7 not found: ID does not exist" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.486898 5058 scope.go:117] "RemoveContainer" containerID="fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696" Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.487560 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696\": container with ID starting with fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696 not found: ID does not exist" containerID="fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.487607 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696"} err="failed to get container status \"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696\": rpc error: code = NotFound desc = could not find container \"fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696\": container with ID starting with fa1b094d6b102c2a5373b01f06a6ac54955bac1d26629977f1552efe73a21696 not found: ID does not exist" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.494740 5058 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs\") pod \"94a23eed-d2e1-422f-acbc-40af4cab721c\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.494835 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data\") pod \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.494902 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r\") pod \"94a23eed-d2e1-422f-acbc-40af4cab721c\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.494971 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs\") pod \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.495016 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle\") pod \"94a23eed-d2e1-422f-acbc-40af4cab721c\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.495076 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle\") pod \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.495198 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data\") pod \"94a23eed-d2e1-422f-acbc-40af4cab721c\" (UID: \"94a23eed-d2e1-422f-acbc-40af4cab721c\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.495258 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9b7m\" (UniqueName: \"kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m\") pod \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\" (UID: \"85e8ec3d-3c02-4720-a15e-ca297fa8a6b7\") " Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.496930 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs" (OuterVolumeSpecName: "logs") pod "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" (UID: "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.497156 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs" (OuterVolumeSpecName: "logs") pod "94a23eed-d2e1-422f-acbc-40af4cab721c" (UID: "94a23eed-d2e1-422f-acbc-40af4cab721c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.509295 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r" (OuterVolumeSpecName: "kube-api-access-m588r") pod "94a23eed-d2e1-422f-acbc-40af4cab721c" (UID: "94a23eed-d2e1-422f-acbc-40af4cab721c"). InnerVolumeSpecName "kube-api-access-m588r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.528753 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m" (OuterVolumeSpecName: "kube-api-access-p9b7m") pod "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" (UID: "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7"). InnerVolumeSpecName "kube-api-access-p9b7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.535325 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" (UID: "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.540192 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a23eed-d2e1-422f-acbc-40af4cab721c" (UID: "94a23eed-d2e1-422f-acbc-40af4cab721c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.542654 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data" (OuterVolumeSpecName: "config-data") pod "94a23eed-d2e1-422f-acbc-40af4cab721c" (UID: "94a23eed-d2e1-422f-acbc-40af4cab721c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.557751 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data" (OuterVolumeSpecName: "config-data") pod "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" (UID: "85e8ec3d-3c02-4720-a15e-ca297fa8a6b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596555 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m588r\" (UniqueName: \"kubernetes.io/projected/94a23eed-d2e1-422f-acbc-40af4cab721c-kube-api-access-m588r\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596589 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596599 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596608 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596618 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a23eed-d2e1-422f-acbc-40af4cab721c-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596627 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9b7m\" (UniqueName: \"kubernetes.io/projected/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-kube-api-access-p9b7m\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596635 5058 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94a23eed-d2e1-422f-acbc-40af4cab721c-logs\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.596642 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.670084 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:51:42 crc kubenswrapper[5058]: W1014 09:51:42.675484 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01281f7e_5368_4ab7_be69_fcfed8f7a207.slice/crio-7d9ab4c128c2227263a52eb566c427590c8818133abd8d313089c188d4f19d5e WatchSource:0}: Error finding container 7d9ab4c128c2227263a52eb566c427590c8818133abd8d313089c188d4f19d5e: Status 404 returned error can't find the container with id 7d9ab4c128c2227263a52eb566c427590c8818133abd8d313089c188d4f19d5e Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.682888 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 09:51:42 crc kubenswrapper[5058]: W1014 09:51:42.683176 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78bf9d98_9166_4c93_adfb_f327f12f6d09.slice/crio-8d3e50de5cde0bb36a82699d35cfeafb511411d4f0daf0c1af1745f9e43d4b96 WatchSource:0}: Error finding container 8d3e50de5cde0bb36a82699d35cfeafb511411d4f0daf0c1af1745f9e43d4b96: Status 404 returned error can't find the container with id 8d3e50de5cde0bb36a82699d35cfeafb511411d4f0daf0c1af1745f9e43d4b96 Oct 
14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.695092 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell3-conductor-0"] Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.723020 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-conductor-0"] Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.756371 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.756862 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-api" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.756879 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-api" Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.756893 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-log" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.756899 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-log" Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.756914 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.756922 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" Oct 14 09:51:42 crc kubenswrapper[5058]: E1014 09:51:42.756965 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.756972 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.757154 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-log" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.757170 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" containerName="nova-api-api" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.757183 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-metadata" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.757195 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" containerName="nova-metadata-log" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.758451 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.761403 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.766311 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.786313 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.800805 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/a005d70d-1142-4257-8095-6aa5023ed28d-kube-api-access-s2q5b\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.800909 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.800933 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-config-data\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.800958 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005d70d-1142-4257-8095-6aa5023ed28d-logs\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.828126 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a70199b-22be-47f0-b8b1-75b894258c1a" path="/var/lib/kubelet/pods/3a70199b-22be-47f0-b8b1-75b894258c1a/volumes"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.828695 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d7fde3-d8a9-40c9-a83c-4003d297615e" path="/var/lib/kubelet/pods/82d7fde3-d8a9-40c9-a83c-4003d297615e/volumes"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.829339 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e8ec3d-3c02-4720-a15e-ca297fa8a6b7" path="/var/lib/kubelet/pods/85e8ec3d-3c02-4720-a15e-ca297fa8a6b7/volumes"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.830428 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60f1acf-e39d-4169-ab16-0c77fa2d92ef" path="/var/lib/kubelet/pods/b60f1acf-e39d-4169-ab16-0c77fa2d92ef/volumes"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.902788 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-config-data\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.903416 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005d70d-1142-4257-8095-6aa5023ed28d-logs\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0"
\"kubernetes.io/empty-dir/a005d70d-1142-4257-8095-6aa5023ed28d-logs\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.903717 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/a005d70d-1142-4257-8095-6aa5023ed28d-kube-api-access-s2q5b\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.903941 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.904525 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005d70d-1142-4257-8095-6aa5023ed28d-logs\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.921894 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-config-data\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.922537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005d70d-1142-4257-8095-6aa5023ed28d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:42 crc kubenswrapper[5058]: I1014 09:51:42.923294 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2q5b\" (UniqueName: \"kubernetes.io/projected/a005d70d-1142-4257-8095-6aa5023ed28d-kube-api-access-s2q5b\") pod \"nova-api-0\" (UID: \"a005d70d-1142-4257-8095-6aa5023ed28d\") " pod="openstack/nova-api-0" Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.079713 5058 util.go:30] "No sandbox for pod can be found. 
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.348468 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e07126c4-2f3b-4001-8e77-5a7555836354","Type":"ContainerStarted","Data":"cbacd26edd6ea35af3ebbe9081f92d56869f10189efecaee39d75052a47f747a"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.348911 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e07126c4-2f3b-4001-8e77-5a7555836354","Type":"ContainerStarted","Data":"ef7cd5cf9876f46c81156fbdd2d64fc9eebf564975fc2ea441ac630ca73904ea"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.349706 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"78bf9d98-9166-4c93-adfb-f327f12f6d09","Type":"ContainerStarted","Data":"ac2b84499044e263e3f0af85de33cbb4365510c2a127d91146b0624110061a00"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.349727 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-conductor-0" event={"ID":"78bf9d98-9166-4c93-adfb-f327f12f6d09","Type":"ContainerStarted","Data":"8d3e50de5cde0bb36a82699d35cfeafb511411d4f0daf0c1af1745f9e43d4b96"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.350513 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell2-conductor-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.352815 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"01281f7e-5368-4ab7-be69-fcfed8f7a207","Type":"ContainerStarted","Data":"eef9a843a2eb0ba1654516af647aaaa9ed1e3d37249a63abb134ccfecfffc465"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.352851 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell3-conductor-0" event={"ID":"01281f7e-5368-4ab7-be69-fcfed8f7a207","Type":"ContainerStarted","Data":"7d9ab4c128c2227263a52eb566c427590c8818133abd8d313089c188d4f19d5e"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.353624 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell3-conductor-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.357327 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"94a23eed-d2e1-422f-acbc-40af4cab721c","Type":"ContainerDied","Data":"d28e9bb139b0b6835a79d92b63a2ade7bf931ea4fd887bf0dbe0f640e8ac0334"}
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.357349 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.357360 5058 scope.go:117] "RemoveContainer" containerID="19b2175ddcb06631577fe677e30dab23bd6bfde38844f7815ad521860f9ec564"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.367630 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.36761481 podStartE2EDuration="2.36761481s" podCreationTimestamp="2025-10-14 09:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:43.365108429 +0000 UTC m=+11051.276192245" watchObservedRunningTime="2025-10-14 09:51:43.36761481 +0000 UTC m=+11051.278698616"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.396407 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-conductor-0" podStartSLOduration=2.396388579 podStartE2EDuration="2.396388579s" podCreationTimestamp="2025-10-14 09:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:43.376825802 +0000 UTC m=+11051.287909628" watchObservedRunningTime="2025-10-14 09:51:43.396388579 +0000 UTC m=+11051.307472385"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.399382 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell3-conductor-0" podStartSLOduration=2.399374954 podStartE2EDuration="2.399374954s" podCreationTimestamp="2025-10-14 09:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:43.390058899 +0000 UTC m=+11051.301142695" watchObservedRunningTime="2025-10-14 09:51:43.399374954 +0000 UTC m=+11051.310458760"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.408098 5058 scope.go:117] "RemoveContainer" containerID="1b7cabe79a9fb0aa7148bbfcda03eb7a9f05811c4c6ff8244605bd4bac2b1a8b"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.423852 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.435636 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.454056 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.458371 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.462046 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.467754 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.563220 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.625400 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e9a657-7533-48bf-9ff4-a12185627a61-logs\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.625777 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.625955 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-config-data\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.626144 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2l6m\" (UniqueName: \"kubernetes.io/projected/21e9a657-7533-48bf-9ff4-a12185627a61-kube-api-access-w2l6m\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.728514 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-config-data\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.728637 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2l6m\" (UniqueName: \"kubernetes.io/projected/21e9a657-7533-48bf-9ff4-a12185627a61-kube-api-access-w2l6m\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.728788 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e9a657-7533-48bf-9ff4-a12185627a61-logs\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.728927 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.729183 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21e9a657-7533-48bf-9ff4-a12185627a61-logs\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.994446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.994540 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9a657-7533-48bf-9ff4-a12185627a61-config-data\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:43 crc kubenswrapper[5058]: I1014 09:51:43.994861 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2l6m\" (UniqueName: \"kubernetes.io/projected/21e9a657-7533-48bf-9ff4-a12185627a61-kube-api-access-w2l6m\") pod \"nova-metadata-0\" (UID: \"21e9a657-7533-48bf-9ff4-a12185627a61\") " pod="openstack/nova-metadata-0"
Oct 14 09:51:44 crc kubenswrapper[5058]: I1014 09:51:44.085362 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 09:51:44 crc kubenswrapper[5058]: I1014 09:51:44.378337 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005d70d-1142-4257-8095-6aa5023ed28d","Type":"ContainerStarted","Data":"8b337077d5cec22378c5a198c88be8bcd1a5f8daf851a227b39f2b5ff0117149"}
Oct 14 09:51:44 crc kubenswrapper[5058]: I1014 09:51:44.378982 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005d70d-1142-4257-8095-6aa5023ed28d","Type":"ContainerStarted","Data":"36731381c9a1a82aaddb04249a5554d8096661fa514d07948a0287c0fc3e07f8"}
Oct 14 09:51:44 crc kubenswrapper[5058]: W1014 09:51:44.604730 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21e9a657_7533_48bf_9ff4_a12185627a61.slice/crio-9ab25268b7cd1c06a4adb1e850f5ac804efdd6c58be3aa77d9c9e374e021de54 WatchSource:0}: Error finding container 9ab25268b7cd1c06a4adb1e850f5ac804efdd6c58be3aa77d9c9e374e021de54: Status 404 returned error can't find the container with id 9ab25268b7cd1c06a4adb1e850f5ac804efdd6c58be3aa77d9c9e374e021de54
Oct 14 09:51:44 crc kubenswrapper[5058]: I1014 09:51:44.611932 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 09:51:44 crc kubenswrapper[5058]: I1014 09:51:44.800751 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a23eed-d2e1-422f-acbc-40af4cab721c" path="/var/lib/kubelet/pods/94a23eed-d2e1-422f-acbc-40af4cab721c/volumes"
Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.397983 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005d70d-1142-4257-8095-6aa5023ed28d","Type":"ContainerStarted","Data":"a71c8faac05f705c8029561e39032a5741634ff3491530bea54ff82ac304c3e3"}
Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.401513 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21e9a657-7533-48bf-9ff4-a12185627a61","Type":"ContainerStarted","Data":"0c48c4806f40847a8c8eea5b5461bcab58724c0accb5f51a8e7ccddc9a6d4d65"}
event={"ID":"21e9a657-7533-48bf-9ff4-a12185627a61","Type":"ContainerStarted","Data":"0c48c4806f40847a8c8eea5b5461bcab58724c0accb5f51a8e7ccddc9a6d4d65"} Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.401548 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21e9a657-7533-48bf-9ff4-a12185627a61","Type":"ContainerStarted","Data":"204e8b92a4a0e7e48cd9d8ac7d2b957998f1189cc13e3f01532f6024be7da598"} Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.401558 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"21e9a657-7533-48bf-9ff4-a12185627a61","Type":"ContainerStarted","Data":"9ab25268b7cd1c06a4adb1e850f5ac804efdd6c58be3aa77d9c9e374e021de54"} Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.423281 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.423258724 podStartE2EDuration="3.423258724s" podCreationTimestamp="2025-10-14 09:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:45.4161053 +0000 UTC m=+11053.327189116" watchObservedRunningTime="2025-10-14 09:51:45.423258724 +0000 UTC m=+11053.334342540" Oct 14 09:51:45 crc kubenswrapper[5058]: I1014 09:51:45.439710 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.439682441 podStartE2EDuration="2.439682441s" podCreationTimestamp="2025-10-14 09:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 09:51:45.430808219 +0000 UTC m=+11053.341892055" watchObservedRunningTime="2025-10-14 09:51:45.439682441 +0000 UTC m=+11053.350766247" Oct 14 09:51:46 crc kubenswrapper[5058]: I1014 09:51:46.853457 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 09:51:48 crc kubenswrapper[5058]: I1014 09:51:48.655930 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 14 09:51:49 crc kubenswrapper[5058]: I1014 09:51:49.086141 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:51:49 crc kubenswrapper[5058]: I1014 09:51:49.086219 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 09:51:49 crc kubenswrapper[5058]: I1014 09:51:49.969914 5058 scope.go:117] "RemoveContainer" containerID="38bd7e9f5f204dca7ed11cec45a145977118f5eb5309800e690b1916e8bdf20b" Oct 14 09:51:50 crc kubenswrapper[5058]: I1014 09:51:50.816251 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 14 09:51:51 crc kubenswrapper[5058]: I1014 09:51:51.853058 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 09:51:51 crc kubenswrapper[5058]: I1014 09:51:51.856617 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell3-conductor-0" Oct 14 09:51:51 crc kubenswrapper[5058]: I1014 09:51:51.894066 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell2-conductor-0" Oct 14 09:51:51 crc kubenswrapper[5058]: I1014 09:51:51.918711 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Oct 14 09:51:52 crc kubenswrapper[5058]: I1014 09:51:52.573392 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 09:51:53 crc kubenswrapper[5058]: I1014 09:51:53.080020 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:51:53 crc kubenswrapper[5058]: I1014 09:51:53.080103 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 09:51:54 crc kubenswrapper[5058]: I1014 09:51:54.086785 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:51:54 crc kubenswrapper[5058]: I1014 09:51:54.086828 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 09:51:54 crc kubenswrapper[5058]: I1014 09:51:54.165040 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a005d70d-1142-4257-8095-6aa5023ed28d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:51:54 crc kubenswrapper[5058]: I1014 09:51:54.165129 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a005d70d-1142-4257-8095-6aa5023ed28d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:51:55 crc kubenswrapper[5058]: I1014 09:51:55.169103 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="21e9a657-7533-48bf-9ff4-a12185627a61" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.235:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:51:55 crc kubenswrapper[5058]: I1014 09:51:55.169114 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="21e9a657-7533-48bf-9ff4-a12185627a61" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.235:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.086452 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.087143 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.087598 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.087651 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.093096 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 09:52:03 crc kubenswrapper[5058]: I1014 09:52:03.095954 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 09:52:04 crc kubenswrapper[5058]: I1014 09:52:04.089703 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 09:52:04 crc kubenswrapper[5058]: I1014 09:52:04.091433 
Oct 14 09:52:04 crc kubenswrapper[5058]: I1014 09:52:04.095303 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 14 09:52:04 crc kubenswrapper[5058]: I1014 09:52:04.694731 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.443162 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"]
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.445894 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.447765 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.449428 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.449561 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.449688 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jdms2"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.449822 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.451075 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.451356 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.459660 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"]
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570305 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tmn\" (UniqueName: \"kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570433 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570478 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
\"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570557 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570661 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570712 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.570912 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.571055 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.571174 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673475 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673591 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673641 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673694 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56tmn\" (UniqueName: \"kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673723 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673750 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673790 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673891 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.673936 5058 
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.675249 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.681413 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.681866 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.681976 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.682055 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.683163 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.684092 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"
\"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.685945 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.697044 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56tmn\" (UniqueName: \"kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.778504 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"] Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.780343 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.782875 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell2" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.784448 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.794640 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell2-dockercfg-d76g7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.796157 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell2-compute-config" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.827126 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"] Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.877519 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.877563 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.877653 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878218 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878514 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878536 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878588 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878638 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.878663 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snj6t\" (UniqueName: \"kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980217 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"
\"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980276 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snj6t\" (UniqueName: \"kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980339 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980372 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980451 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cells-global-config-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980517 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980553 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980577 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " 
pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.980626 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.982668 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cells-global-config-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.985759 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.987021 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.987681 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.988463 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.988582 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.989345 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:06 crc kubenswrapper[5058]: I1014 09:52:06.999405 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:07 crc kubenswrapper[5058]: I1014 09:52:07.002409 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snj6t\" (UniqueName: \"kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t\") pod \"nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:07 crc kubenswrapper[5058]: I1014 09:52:07.133730 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:52:07 crc kubenswrapper[5058]: I1014 09:52:07.430626 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr"] Oct 14 09:52:07 crc kubenswrapper[5058]: I1014 09:52:07.753912 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" event={"ID":"2c20d114-97e4-4d3e-a7a0-40034e38b794","Type":"ContainerStarted","Data":"93984dae71d12b7910a0b0c1e4ce9ef8e9184ff2fbb2b86cba17eaaf629a4fc2"} Oct 14 09:52:07 crc kubenswrapper[5058]: W1014 09:52:07.763319 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a0193e1_89dc_4ce4_a87c_a16916218a8a.slice/crio-02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018 WatchSource:0}: Error finding container 02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018: Status 404 returned error can't find the container with id 02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018 Oct 14 09:52:07 crc kubenswrapper[5058]: I1014 09:52:07.767412 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7"] Oct 14 09:52:08 crc kubenswrapper[5058]: I1014 09:52:08.768339 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" event={"ID":"5a0193e1-89dc-4ce4-a87c-a16916218a8a","Type":"ContainerStarted","Data":"a5f508f8358d5ded69de0f842c44756f18850efc8ccb7e34bd6b4c327c0b1597"} Oct 14 09:52:08 crc kubenswrapper[5058]: I1014 09:52:08.768901 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" event={"ID":"5a0193e1-89dc-4ce4-a87c-a16916218a8a","Type":"ContainerStarted","Data":"02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018"} Oct 14 09:52:08 crc kubenswrapper[5058]: I1014 09:52:08.772642 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" event={"ID":"2c20d114-97e4-4d3e-a7a0-40034e38b794","Type":"ContainerStarted","Data":"482dbbf980e1e8e570cc106ac3daa27589deae677400b3449f239de53a24df19"} Oct 14 09:52:08 crc kubenswrapper[5058]: I1014 09:52:08.791273 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" podStartSLOduration=2.23252435 podStartE2EDuration="2.791246036s" podCreationTimestamp="2025-10-14 09:52:06 +0000 UTC" firstStartedPulling="2025-10-14 09:52:07.767334145 +0000 UTC m=+11075.678417951" lastFinishedPulling="2025-10-14 09:52:08.326055841 +0000 UTC m=+11076.237139637" observedRunningTime="2025-10-14 09:52:08.789170187 +0000 UTC m=+11076.700254033" watchObservedRunningTime="2025-10-14 09:52:08.791246036 +0000 UTC m=+11076.702329872" Oct 14 09:52:08 crc kubenswrapper[5058]: I1014 09:52:08.826752 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" podStartSLOduration=2.350369313 podStartE2EDuration="2.826733036s" podCreationTimestamp="2025-10-14 09:52:06 +0000 UTC" firstStartedPulling="2025-10-14 09:52:07.416335459 +0000 UTC m=+11075.327419275" lastFinishedPulling="2025-10-14 09:52:07.892699172 +0000 UTC m=+11075.803782998" observedRunningTime="2025-10-14 09:52:08.81634883 +0000 UTC m=+11076.727432676" watchObservedRunningTime="2025-10-14 09:52:08.826733036 +0000 UTC m=+11076.737816862" Oct 14 09:53:03 crc kubenswrapper[5058]: I1014 09:53:03.656645 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:53:03 crc kubenswrapper[5058]: I1014 09:53:03.657465 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:53:33 crc kubenswrapper[5058]: I1014 09:53:33.655995 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:53:33 crc kubenswrapper[5058]: I1014 09:53:33.656602 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:54:03 crc kubenswrapper[5058]: I1014 09:54:03.656482 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:54:03 crc kubenswrapper[5058]: I1014 09:54:03.657522 5058 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:54:03 crc kubenswrapper[5058]: I1014 09:54:03.657612 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:54:03 crc kubenswrapper[5058]: I1014 09:54:03.659847 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 09:54:03 crc kubenswrapper[5058]: I1014 09:54:03.660062 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7" gracePeriod=600 Oct 14 09:54:04 crc kubenswrapper[5058]: I1014 09:54:04.346815 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7" exitCode=0 Oct 14 09:54:04 crc kubenswrapper[5058]: I1014 09:54:04.346842 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7"} Oct 14 09:54:04 crc kubenswrapper[5058]: I1014 09:54:04.347250 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"} Oct 14 09:54:04 crc kubenswrapper[5058]: I1014 09:54:04.347286 5058 scope.go:117] "RemoveContainer" containerID="922d2e0187f8a5263b71deb503e48d230846acd3c7f6c8138cdf43bf803fb35f" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.352324 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.357537 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.371959 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.490397 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx285\" (UniqueName: \"kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.490855 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.491080 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.592605 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.593033 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.593104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx285\" (UniqueName: \"kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.594399 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.594723 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.615230 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cx285\" (UniqueName: \"kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285\") pod \"redhat-operators-6nkwc\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:47 crc kubenswrapper[5058]: I1014 09:54:47.729437 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:48 crc kubenswrapper[5058]: I1014 09:54:48.226612 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:54:48 crc kubenswrapper[5058]: I1014 09:54:48.955690 5058 generic.go:334] "Generic (PLEG): container finished" podID="9569ec0e-f110-4246-8840-424d1df228d4" containerID="f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75" exitCode=0 Oct 14 09:54:48 crc kubenswrapper[5058]: I1014 09:54:48.955761 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerDied","Data":"f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75"} Oct 14 09:54:48 crc kubenswrapper[5058]: I1014 09:54:48.956248 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerStarted","Data":"f00e63a20d93fd398e0955e9db55696fadd053c34a0a335f234d7a97e51e1934"} Oct 14 09:54:48 crc kubenswrapper[5058]: I1014 09:54:48.958138 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 09:54:49 crc kubenswrapper[5058]: I1014 09:54:49.973216 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerStarted","Data":"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551"} Oct 14 09:54:53 crc kubenswrapper[5058]: I1014 09:54:53.016590 5058 generic.go:334] "Generic (PLEG): container finished" podID="9569ec0e-f110-4246-8840-424d1df228d4" containerID="62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551" exitCode=0 Oct 14 09:54:53 crc kubenswrapper[5058]: I1014 09:54:53.016707 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerDied","Data":"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551"} Oct 14 09:54:54 crc kubenswrapper[5058]: I1014 09:54:54.035925 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerStarted","Data":"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f"} Oct 14 09:54:54 crc kubenswrapper[5058]: I1014 09:54:54.065421 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nkwc" podStartSLOduration=2.308124655 podStartE2EDuration="7.065394822s" podCreationTimestamp="2025-10-14 09:54:47 +0000 UTC" firstStartedPulling="2025-10-14 09:54:48.957823088 +0000 UTC m=+11236.868906904" lastFinishedPulling="2025-10-14 09:54:53.715093255 +0000 UTC m=+11241.626177071" observedRunningTime="2025-10-14 09:54:54.055471429 +0000 UTC m=+11241.966555255" watchObservedRunningTime="2025-10-14 09:54:54.065394822 +0000 UTC m=+11241.976478638" Oct 14 09:54:57 crc 
kubenswrapper[5058]: I1014 09:54:57.729740 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:57 crc kubenswrapper[5058]: I1014 09:54:57.730392 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:54:58 crc kubenswrapper[5058]: I1014 09:54:58.799744 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6nkwc" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="registry-server" probeResult="failure" output=< Oct 14 09:54:58 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 09:54:58 crc kubenswrapper[5058]: > Oct 14 09:55:07 crc kubenswrapper[5058]: I1014 09:55:07.817719 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:55:07 crc kubenswrapper[5058]: I1014 09:55:07.912338 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:55:08 crc kubenswrapper[5058]: I1014 09:55:08.086615 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.240418 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nkwc" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="registry-server" containerID="cri-o://204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f" gracePeriod=2 Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.779151 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.885081 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities\") pod \"9569ec0e-f110-4246-8840-424d1df228d4\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.885190 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx285\" (UniqueName: \"kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285\") pod \"9569ec0e-f110-4246-8840-424d1df228d4\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.885239 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content\") pod \"9569ec0e-f110-4246-8840-424d1df228d4\" (UID: \"9569ec0e-f110-4246-8840-424d1df228d4\") " Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.886434 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities" (OuterVolumeSpecName: "utilities") pod "9569ec0e-f110-4246-8840-424d1df228d4" (UID: "9569ec0e-f110-4246-8840-424d1df228d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.895084 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285" (OuterVolumeSpecName: "kube-api-access-cx285") pod "9569ec0e-f110-4246-8840-424d1df228d4" (UID: "9569ec0e-f110-4246-8840-424d1df228d4"). InnerVolumeSpecName "kube-api-access-cx285". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.988531 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.988568 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx285\" (UniqueName: \"kubernetes.io/projected/9569ec0e-f110-4246-8840-424d1df228d4-kube-api-access-cx285\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:09 crc kubenswrapper[5058]: I1014 09:55:09.992369 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9569ec0e-f110-4246-8840-424d1df228d4" (UID: "9569ec0e-f110-4246-8840-424d1df228d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.091617 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569ec0e-f110-4246-8840-424d1df228d4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.263325 5058 generic.go:334] "Generic (PLEG): container finished" podID="9569ec0e-f110-4246-8840-424d1df228d4" containerID="204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f" exitCode=0 Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.263409 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerDied","Data":"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f"} Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.263435 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nkwc" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.263479 5058 scope.go:117] "RemoveContainer" containerID="204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.263459 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nkwc" event={"ID":"9569ec0e-f110-4246-8840-424d1df228d4","Type":"ContainerDied","Data":"f00e63a20d93fd398e0955e9db55696fadd053c34a0a335f234d7a97e51e1934"} Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.305339 5058 scope.go:117] "RemoveContainer" containerID="62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.321851 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.331708 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nkwc"] Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.339018 5058 scope.go:117] "RemoveContainer" containerID="f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.398375 5058 scope.go:117] "RemoveContainer" containerID="204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f" Oct 14 09:55:10 crc kubenswrapper[5058]: E1014 09:55:10.398784 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f\": container with ID starting with 204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f not found: ID does not exist" containerID="204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.398893 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f"} err="failed to get container status \"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f\": rpc error: code = NotFound desc = could not find container \"204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f\": container with ID starting with 204e7f2b435da893f39472032d8e9e7d42a55c79d6d96d77e66c4140d32ffa2f not found: ID does not exist" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.398930 5058 scope.go:117] "RemoveContainer" containerID="62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551" Oct 14 09:55:10 crc kubenswrapper[5058]: E1014 09:55:10.401551 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551\": container with ID starting with 62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551 not found: ID does not exist" containerID="62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.401600 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551"} err="failed to get container status \"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551\": rpc error: code = NotFound desc = could not find container 
\"62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551\": container with ID starting with 62877d169b3dc0eccb4d9a7cff053577dcf655b069693fa27ccf5f1ce601e551 not found: ID does not exist" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.401630 5058 scope.go:117] "RemoveContainer" containerID="f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75" Oct 14 09:55:10 crc kubenswrapper[5058]: E1014 09:55:10.402256 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75\": container with ID starting with f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75 not found: ID does not exist" containerID="f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.402278 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75"} err="failed to get container status \"f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75\": rpc error: code = NotFound desc = could not find container \"f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75\": container with ID starting with f92dac9b74335f9b7bd52e4510e1f0e7b8f11aea703b005e7eaeda0c28f80c75 not found: ID does not exist" Oct 14 09:55:10 crc kubenswrapper[5058]: I1014 09:55:10.816002 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9569ec0e-f110-4246-8840-424d1df228d4" path="/var/lib/kubelet/pods/9569ec0e-f110-4246-8840-424d1df228d4/volumes" Oct 14 09:55:11 crc kubenswrapper[5058]: I1014 09:55:11.277414 5058 generic.go:334] "Generic (PLEG): container finished" podID="2c20d114-97e4-4d3e-a7a0-40034e38b794" containerID="482dbbf980e1e8e570cc106ac3daa27589deae677400b3449f239de53a24df19" exitCode=0 Oct 14 09:55:11 crc kubenswrapper[5058]: I1014 09:55:11.277453 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" event={"ID":"2c20d114-97e4-4d3e-a7a0-40034e38b794","Type":"ContainerDied","Data":"482dbbf980e1e8e570cc106ac3daa27589deae677400b3449f239de53a24df19"} Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.823154 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.966227 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.967005 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.967274 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56tmn\" (UniqueName: \"kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.967498 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.967741 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cells-global-config-0\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.967970 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.968225 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.968494 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.968724 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key\") pod \"2c20d114-97e4-4d3e-a7a0-40034e38b794\" (UID: \"2c20d114-97e4-4d3e-a7a0-40034e38b794\") " Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.972942 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.978386 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn" (OuterVolumeSpecName: "kube-api-access-56tmn") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "kube-api-access-56tmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:55:12 crc kubenswrapper[5058]: I1014 09:55:12.999114 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.014385 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.016102 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.023834 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.025823 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.029635 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory" (OuterVolumeSpecName: "inventory") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.034925 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2c20d114-97e4-4d3e-a7a0-40034e38b794" (UID: "2c20d114-97e4-4d3e-a7a0-40034e38b794"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072054 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072092 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56tmn\" (UniqueName: \"kubernetes.io/projected/2c20d114-97e4-4d3e-a7a0-40034e38b794-kube-api-access-56tmn\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072104 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072117 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072130 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072143 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072154 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072164 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.072176 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2c20d114-97e4-4d3e-a7a0-40034e38b794-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.305775 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" event={"ID":"2c20d114-97e4-4d3e-a7a0-40034e38b794","Type":"ContainerDied","Data":"93984dae71d12b7910a0b0c1e4ce9ef8e9184ff2fbb2b86cba17eaaf629a4fc2"} Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.305847 5058 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="93984dae71d12b7910a0b0c1e4ce9ef8e9184ff2fbb2b86cba17eaaf629a4fc2" Oct 14 09:55:13 crc kubenswrapper[5058]: I1014 09:55:13.305883 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr" Oct 14 09:55:14 crc kubenswrapper[5058]: I1014 09:55:14.318597 5058 generic.go:334] "Generic (PLEG): container finished" podID="5a0193e1-89dc-4ce4-a87c-a16916218a8a" containerID="a5f508f8358d5ded69de0f842c44756f18850efc8ccb7e34bd6b4c327c0b1597" exitCode=0 Oct 14 09:55:14 crc kubenswrapper[5058]: I1014 09:55:14.318642 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" event={"ID":"5a0193e1-89dc-4ce4-a87c-a16916218a8a","Type":"ContainerDied","Data":"a5f508f8358d5ded69de0f842c44756f18850efc8ccb7e34bd6b4c327c0b1597"} Oct 14 09:55:15 crc kubenswrapper[5058]: I1014 09:55:15.881432 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.041237 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.041309 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snj6t\" (UniqueName: \"kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.041349 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.041408 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.041448 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cells-global-config-0\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.042352 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.042420 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.042484 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.042522 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1\") pod \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\" (UID: \"5a0193e1-89dc-4ce4-a87c-a16916218a8a\") " Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.047481 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell2-combined-ca-bundle") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "nova-cell2-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.050481 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t" (OuterVolumeSpecName: "kube-api-access-snj6t") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "kube-api-access-snj6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.073055 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.084338 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.089350 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.094113 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). 
InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.101212 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1" (OuterVolumeSpecName: "nova-cell2-compute-config-1") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "nova-cell2-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.101577 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0" (OuterVolumeSpecName: "nova-cell2-compute-config-0") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "nova-cell2-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.111941 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory" (OuterVolumeSpecName: "inventory") pod "5a0193e1-89dc-4ce4-a87c-a16916218a8a" (UID: "5a0193e1-89dc-4ce4-a87c-a16916218a8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.145484 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.145866 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snj6t\" (UniqueName: \"kubernetes.io/projected/5a0193e1-89dc-4ce4-a87c-a16916218a8a-kube-api-access-snj6t\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.145980 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146130 5058 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146243 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146340 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146437 5058 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146542 5058 
reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.146644 5058 reconciler_common.go:293] "Volume detached for volume \"nova-cell2-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0193e1-89dc-4ce4-a87c-a16916218a8a-nova-cell2-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.342470 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" event={"ID":"5a0193e1-89dc-4ce4-a87c-a16916218a8a","Type":"ContainerDied","Data":"02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018"} Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.342524 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02535378fd12b5e7926db71ecc00b45955eea20a54e5f271c5d752225d94b018" Oct 14 09:55:16 crc kubenswrapper[5058]: I1014 09:55:16.342579 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7" Oct 14 09:56:03 crc kubenswrapper[5058]: I1014 09:56:03.656673 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:56:03 crc kubenswrapper[5058]: I1014 09:56:03.657519 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.776854 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:28 crc kubenswrapper[5058]: E1014 09:56:28.777893 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="extract-utilities" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.777907 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="extract-utilities" Oct 14 09:56:28 crc kubenswrapper[5058]: E1014 09:56:28.777920 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="registry-server" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.777926 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="registry-server" Oct 14 09:56:28 crc kubenswrapper[5058]: E1014 09:56:28.777950 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0193e1-89dc-4ce4-a87c-a16916218a8a" containerName="nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cell2" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.777957 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0193e1-89dc-4ce4-a87c-a16916218a8a" containerName="nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cell2" Oct 14 09:56:28 crc kubenswrapper[5058]: E1014 09:56:28.777969 5058 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2c20d114-97e4-4d3e-a7a0-40034e38b794" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.777975 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c20d114-97e4-4d3e-a7a0-40034e38b794" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 14 09:56:28 crc kubenswrapper[5058]: E1014 09:56:28.777998 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="extract-content" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.778004 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="extract-content" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.778281 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0193e1-89dc-4ce4-a87c-a16916218a8a" containerName="nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cell2" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.778324 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c20d114-97e4-4d3e-a7a0-40034e38b794" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.778339 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9569ec0e-f110-4246-8840-424d1df228d4" containerName="registry-server" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.780515 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.834933 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.862996 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.863403 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrqb\" (UniqueName: \"kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.863440 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.966248 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.966397 
5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rrqb\" (UniqueName: \"kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.966451 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.966944 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.967207 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:28 crc kubenswrapper[5058]: I1014 09:56:28.994538 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rrqb\" (UniqueName: \"kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb\") pod \"redhat-marketplace-4489r\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:29 crc kubenswrapper[5058]: I1014 09:56:29.112309 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:29 crc kubenswrapper[5058]: I1014 09:56:29.627856 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:30 crc kubenswrapper[5058]: I1014 09:56:30.319516 5058 generic.go:334] "Generic (PLEG): container finished" podID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerID="65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450" exitCode=0 Oct 14 09:56:30 crc kubenswrapper[5058]: I1014 09:56:30.319873 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerDied","Data":"65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450"} Oct 14 09:56:30 crc kubenswrapper[5058]: I1014 09:56:30.319907 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerStarted","Data":"bc642746daf4cee91cef00b27c03dc214e36d463543410c4c69abe6b12ad7bd0"} Oct 14 09:56:31 crc kubenswrapper[5058]: I1014 09:56:31.374148 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerStarted","Data":"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8"} Oct 14 09:56:32 crc kubenswrapper[5058]: I1014 09:56:32.389641 5058 generic.go:334] "Generic (PLEG): container finished" podID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerID="2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8" exitCode=0 Oct 14 09:56:32 crc kubenswrapper[5058]: I1014 09:56:32.389757 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerDied","Data":"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8"} Oct 14 09:56:33 crc kubenswrapper[5058]: I1014 09:56:33.402779 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerStarted","Data":"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74"} Oct 14 09:56:33 crc kubenswrapper[5058]: I1014 09:56:33.437460 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4489r" podStartSLOduration=2.887399667 podStartE2EDuration="5.437432497s" podCreationTimestamp="2025-10-14 09:56:28 +0000 UTC" firstStartedPulling="2025-10-14 09:56:30.323889335 +0000 UTC m=+11338.234973181" lastFinishedPulling="2025-10-14 09:56:32.873922205 +0000 UTC m=+11340.785006011" observedRunningTime="2025-10-14 09:56:33.423671186 +0000 UTC m=+11341.334755042" watchObservedRunningTime="2025-10-14 09:56:33.437432497 +0000 UTC m=+11341.348516343" Oct 14 09:56:33 crc kubenswrapper[5058]: I1014 09:56:33.656359 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:56:33 crc kubenswrapper[5058]: I1014 09:56:33.656420 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" 
podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:56:39 crc kubenswrapper[5058]: I1014 09:56:39.112902 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:39 crc kubenswrapper[5058]: I1014 09:56:39.114490 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:39 crc kubenswrapper[5058]: I1014 09:56:39.198502 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:39 crc kubenswrapper[5058]: I1014 09:56:39.563425 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:39 crc kubenswrapper[5058]: I1014 09:56:39.639152 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:41 crc kubenswrapper[5058]: I1014 09:56:41.514210 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4489r" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="registry-server" containerID="cri-o://5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74" gracePeriod=2 Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.058582 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.195936 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content\") pod \"265449a0-86aa-4df5-a8e8-adb500fd1d39\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.196003 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rrqb\" (UniqueName: \"kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb\") pod \"265449a0-86aa-4df5-a8e8-adb500fd1d39\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.196140 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities\") pod \"265449a0-86aa-4df5-a8e8-adb500fd1d39\" (UID: \"265449a0-86aa-4df5-a8e8-adb500fd1d39\") " Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.197744 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities" (OuterVolumeSpecName: "utilities") pod "265449a0-86aa-4df5-a8e8-adb500fd1d39" (UID: "265449a0-86aa-4df5-a8e8-adb500fd1d39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.206316 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb" (OuterVolumeSpecName: "kube-api-access-5rrqb") pod "265449a0-86aa-4df5-a8e8-adb500fd1d39" (UID: "265449a0-86aa-4df5-a8e8-adb500fd1d39"). InnerVolumeSpecName "kube-api-access-5rrqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.212398 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "265449a0-86aa-4df5-a8e8-adb500fd1d39" (UID: "265449a0-86aa-4df5-a8e8-adb500fd1d39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.298921 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.298971 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rrqb\" (UniqueName: \"kubernetes.io/projected/265449a0-86aa-4df5-a8e8-adb500fd1d39-kube-api-access-5rrqb\") on node \"crc\" DevicePath \"\"" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.298993 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/265449a0-86aa-4df5-a8e8-adb500fd1d39-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.528686 5058 generic.go:334] "Generic (PLEG): container finished" podID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerID="5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74" exitCode=0 Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.528747 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerDied","Data":"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74"} Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.528785 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4489r" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.528833 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4489r" event={"ID":"265449a0-86aa-4df5-a8e8-adb500fd1d39","Type":"ContainerDied","Data":"bc642746daf4cee91cef00b27c03dc214e36d463543410c4c69abe6b12ad7bd0"} Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.528887 5058 scope.go:117] "RemoveContainer" containerID="5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.559566 5058 scope.go:117] "RemoveContainer" containerID="2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.601191 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.617724 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4489r"] Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.618514 5058 scope.go:117] "RemoveContainer" containerID="65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.666912 5058 scope.go:117] "RemoveContainer" containerID="5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74" Oct 14 09:56:42 crc kubenswrapper[5058]: E1014 09:56:42.667456 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74\": container with ID starting with 5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74 not found: ID does not exist" containerID="5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.667514 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74"} err="failed to get container status \"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74\": rpc error: code = NotFound desc = could not find container \"5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74\": container with ID starting with 5623fdc150b165c74f298e6aac1869f36b628983a82052956f675c3d2d2eab74 not found: ID does not exist" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.667553 5058 scope.go:117] "RemoveContainer" containerID="2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8" Oct 14 09:56:42 crc kubenswrapper[5058]: E1014 09:56:42.668100 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8\": container with ID starting with 2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8 not found: ID does not exist" containerID="2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.668140 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8"} err="failed to get container status \"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8\": rpc error: code = NotFound desc = could not find 
container \"2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8\": container with ID starting with 2a9bbf4d1f6c993bee157d22cf14c8ea2e4e5b18158c283e975cff77bc5fedd8 not found: ID does not exist" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.668170 5058 scope.go:117] "RemoveContainer" containerID="65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450" Oct 14 09:56:42 crc kubenswrapper[5058]: E1014 09:56:42.668590 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450\": container with ID starting with 65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450 not found: ID does not exist" containerID="65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.668634 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450"} err="failed to get container status \"65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450\": rpc error: code = NotFound desc = could not find container \"65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450\": container with ID starting with 65329b2c0ce772f7c62f8c46797b7b005804a30ccf8cd5fcc77920478bd1d450 not found: ID does not exist" Oct 14 09:56:42 crc kubenswrapper[5058]: I1014 09:56:42.803000 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" path="/var/lib/kubelet/pods/265449a0-86aa-4df5-a8e8-adb500fd1d39/volumes" Oct 14 09:56:54 crc kubenswrapper[5058]: I1014 09:56:54.268263 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Oct 14 09:56:54 crc kubenswrapper[5058]: I1014 09:56:54.269401 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="d59cf81a-539e-4db7-9c0f-f5e075fea478" containerName="adoption" containerID="cri-o://a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1" gracePeriod=30 Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.656001 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.656746 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.656859 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.658438 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.658708 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" gracePeriod=600 Oct 14 09:57:03 crc kubenswrapper[5058]: E1014 09:57:03.797319 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.828854 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"} Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.828936 5058 scope.go:117] "RemoveContainer" containerID="0a2cedfce5628ab51803f44e76505a9365df9ed4bd2f29645beb563cf71925b7" Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.828791 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" exitCode=0 Oct 14 09:57:03 crc kubenswrapper[5058]: I1014 09:57:03.830369 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:57:03 crc kubenswrapper[5058]: E1014 09:57:03.830924 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:17 crc kubenswrapper[5058]: I1014 09:57:17.791684 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:57:17 crc kubenswrapper[5058]: E1014 09:57:17.792960 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:24 crc kubenswrapper[5058]: I1014 09:57:24.805279 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 14 09:57:24 crc kubenswrapper[5058]: I1014 09:57:24.929312 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvkzt\" (UniqueName: \"kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt\") pod \"d59cf81a-539e-4db7-9c0f-f5e075fea478\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " Oct 14 09:57:24 crc kubenswrapper[5058]: I1014 09:57:24.930013 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") pod \"d59cf81a-539e-4db7-9c0f-f5e075fea478\" (UID: \"d59cf81a-539e-4db7-9c0f-f5e075fea478\") " Oct 14 09:57:24 crc kubenswrapper[5058]: I1014 09:57:24.941147 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt" (OuterVolumeSpecName: "kube-api-access-rvkzt") pod "d59cf81a-539e-4db7-9c0f-f5e075fea478" (UID: "d59cf81a-539e-4db7-9c0f-f5e075fea478"). InnerVolumeSpecName "kube-api-access-rvkzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:24 crc kubenswrapper[5058]: I1014 09:57:24.963785 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733" (OuterVolumeSpecName: "mariadb-data") pod "d59cf81a-539e-4db7-9c0f-f5e075fea478" (UID: "d59cf81a-539e-4db7-9c0f-f5e075fea478"). InnerVolumeSpecName "pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.032393 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvkzt\" (UniqueName: \"kubernetes.io/projected/d59cf81a-539e-4db7-9c0f-f5e075fea478-kube-api-access-rvkzt\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.032477 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") on node \"crc\" " Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.080822 5058 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.081075 5058 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733") on node "crc" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.098292 5058 generic.go:334] "Generic (PLEG): container finished" podID="d59cf81a-539e-4db7-9c0f-f5e075fea478" containerID="a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1" exitCode=137 Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.098332 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d59cf81a-539e-4db7-9c0f-f5e075fea478","Type":"ContainerDied","Data":"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1"} Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.098354 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d59cf81a-539e-4db7-9c0f-f5e075fea478","Type":"ContainerDied","Data":"071d5bf949f8bd6db65487946d2df257aa11be30695b047b4107089abc582c71"} Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.098353 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.098370 5058 scope.go:117] "RemoveContainer" containerID="a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.134616 5058 reconciler_common.go:293] "Volume detached for volume \"pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0458bf2d-f093-4d0c-a0f2-e395b6bad733\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.142154 5058 scope.go:117] "RemoveContainer" containerID="a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1" Oct 14 09:57:25 crc kubenswrapper[5058]: E1014 09:57:25.142742 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1\": container with ID starting with a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1 not found: ID does not exist" containerID="a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.142786 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1"} err="failed to get container status \"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1\": rpc error: code = NotFound desc = could not find container \"a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1\": container with ID starting with a1ca76f328b4baa62ca0454dc94c7a12ec2bf9117c870d1b3729f743e7fe4ae1 not found: ID does not exist" Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.150290 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.167596 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Oct 14 09:57:25 crc kubenswrapper[5058]: I1014 09:57:25.899236 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Oct 14 09:57:25 crc 
kubenswrapper[5058]: I1014 09:57:25.899821 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" containerName="adoption" containerID="cri-o://7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f" gracePeriod=30 Oct 14 09:57:26 crc kubenswrapper[5058]: I1014 09:57:26.810903 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59cf81a-539e-4db7-9c0f-f5e075fea478" path="/var/lib/kubelet/pods/d59cf81a-539e-4db7-9c0f-f5e075fea478/volumes" Oct 14 09:57:30 crc kubenswrapper[5058]: I1014 09:57:30.789911 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:57:30 crc kubenswrapper[5058]: E1014 09:57:30.790604 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:43 crc kubenswrapper[5058]: I1014 09:57:43.790982 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:57:43 crc kubenswrapper[5058]: E1014 09:57:43.792270 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:54 crc kubenswrapper[5058]: I1014 09:57:54.790581 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:57:54 crc kubenswrapper[5058]: E1014 09:57:54.791624 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.499153 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.557139 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert\") pod \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.558261 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") pod \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.558307 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp9r5\" (UniqueName: \"kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5\") pod \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\" (UID: \"1c026dd0-fcb2-4488-bb1a-39b3b72690c8\") " Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.573856 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5" (OuterVolumeSpecName: "kube-api-access-lp9r5") pod "1c026dd0-fcb2-4488-bb1a-39b3b72690c8" (UID: "1c026dd0-fcb2-4488-bb1a-39b3b72690c8"). InnerVolumeSpecName "kube-api-access-lp9r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.581694 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "1c026dd0-fcb2-4488-bb1a-39b3b72690c8" (UID: "1c026dd0-fcb2-4488-bb1a-39b3b72690c8"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.587685 5058 generic.go:334] "Generic (PLEG): container finished" podID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" containerID="7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f" exitCode=137 Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.587730 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"1c026dd0-fcb2-4488-bb1a-39b3b72690c8","Type":"ContainerDied","Data":"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f"} Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.587756 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"1c026dd0-fcb2-4488-bb1a-39b3b72690c8","Type":"ContainerDied","Data":"3ba2a2fca25948da7ccfa93d2af8150a1ddae12bb2bac695216516eeb230d1ca"} Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.587774 5058 scope.go:117] "RemoveContainer" containerID="7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.587948 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.588375 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6" (OuterVolumeSpecName: "ovn-data") pod "1c026dd0-fcb2-4488-bb1a-39b3b72690c8" (UID: "1c026dd0-fcb2-4488-bb1a-39b3b72690c8"). InnerVolumeSpecName "pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.660608 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") on node \"crc\" " Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.660653 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp9r5\" (UniqueName: \"kubernetes.io/projected/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-kube-api-access-lp9r5\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.660666 5058 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/1c026dd0-fcb2-4488-bb1a-39b3b72690c8-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.676278 5058 scope.go:117] "RemoveContainer" containerID="7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f" Oct 14 09:57:56 crc kubenswrapper[5058]: E1014 09:57:56.676749 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f\": container with ID starting with 7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f not found: ID does not exist" containerID="7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.676773 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f"} err="failed to get container status \"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f\": rpc error: code = NotFound desc = could not find container \"7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f\": container with ID starting with 7508e79e8febf1a7cd134774e510ececb96e2cd350570b34ed98661a900c0c0f not found: ID does not exist" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.682974 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.694293 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.695851 5058 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.696059 5058 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6") on node "crc" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.762944 5058 reconciler_common.go:293] "Volume detached for volume \"pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d30c553c-b89d-4f72-ba15-b209aeb846e6\") on node \"crc\" DevicePath \"\"" Oct 14 09:57:56 crc kubenswrapper[5058]: I1014 09:57:56.808146 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" path="/var/lib/kubelet/pods/1c026dd0-fcb2-4488-bb1a-39b3b72690c8/volumes" Oct 14 09:58:08 crc kubenswrapper[5058]: I1014 09:58:08.790285 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:58:08 crc kubenswrapper[5058]: E1014 09:58:08.792079 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:58:21 crc kubenswrapper[5058]: I1014 09:58:21.790496 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:58:21 crc kubenswrapper[5058]: E1014 09:58:21.791613 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.115270 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 09:58:23 crc kubenswrapper[5058]: E1014 09:58:23.116175 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116198 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: E1014 09:58:23.116218 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59cf81a-539e-4db7-9c0f-f5e075fea478" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116226 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59cf81a-539e-4db7-9c0f-f5e075fea478" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: E1014 09:58:23.116250 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="registry-server" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116258 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="registry-server" Oct 14 09:58:23 crc 
kubenswrapper[5058]: E1014 09:58:23.116275 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="extract-utilities" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116284 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="extract-utilities" Oct 14 09:58:23 crc kubenswrapper[5058]: E1014 09:58:23.116326 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="extract-content" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116335 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="extract-content" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116598 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c026dd0-fcb2-4488-bb1a-39b3b72690c8" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116630 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59cf81a-539e-4db7-9c0f-f5e075fea478" containerName="adoption" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.116656 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="265449a0-86aa-4df5-a8e8-adb500fd1d39" containerName="registry-server" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.117658 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.122094 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.122885 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.123202 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.124490 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.137573 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.258543 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.258626 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.258840 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " 
pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.258946 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.259056 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.259183 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpxlp\" (UniqueName: \"kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.259278 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.259453 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.259487 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.361294 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxlp\" (UniqueName: \"kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.361590 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.361788 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.361946 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.362120 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.362277 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.362139 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.363567 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.363728 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.363859 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.362696 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.362445 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 
09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.364739 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.365484 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.370621 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.372528 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.373501 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.381936 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxlp\" (UniqueName: \"kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.410739 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.444261 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 09:58:23 crc kubenswrapper[5058]: I1014 09:58:23.971392 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 09:58:23 crc kubenswrapper[5058]: W1014 09:58:23.991097 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68b0860_bc9e_43b0_a2c5_4fd969d4149a.slice/crio-6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507 WatchSource:0}: Error finding container 6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507: Status 404 returned error can't find the container with id 6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507 Oct 14 09:58:24 crc kubenswrapper[5058]: I1014 09:58:24.995296 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e68b0860-bc9e-43b0-a2c5-4fd969d4149a","Type":"ContainerStarted","Data":"6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507"} Oct 14 09:58:35 crc kubenswrapper[5058]: I1014 09:58:35.791051 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:58:35 crc kubenswrapper[5058]: E1014 09:58:35.792038 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:58:49 crc kubenswrapper[5058]: I1014 09:58:49.789903 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:58:49 crc kubenswrapper[5058]: E1014 09:58:49.790643 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:59:04 crc kubenswrapper[5058]: I1014 09:59:04.791062 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:59:04 crc kubenswrapper[5058]: E1014 09:59:04.792235 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:59:11 crc kubenswrapper[5058]: E1014 09:59:11.503971 5058 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:0468cb21803d466b2abfe00835cf1d2d" Oct 14 09:59:11 crc kubenswrapper[5058]: E1014 09:59:11.504481 5058 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:0468cb21803d466b2abfe00835cf1d2d" Oct 14 09:59:11 crc kubenswrapper[5058]: E1014 09:59:11.504616 5058 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:0468cb21803d466b2abfe00835cf1d2d,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpxlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e68b0860-bc9e-43b0-a2c5-4fd969d4149a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 09:59:11 crc kubenswrapper[5058]: E1014 09:59:11.505748 5058 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" Oct 14 09:59:11 crc kubenswrapper[5058]: E1014 09:59:11.584012 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:0468cb21803d466b2abfe00835cf1d2d\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" Oct 14 09:59:18 crc kubenswrapper[5058]: I1014 09:59:18.790925 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:59:18 crc kubenswrapper[5058]: E1014 09:59:18.791730 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:59:24 crc kubenswrapper[5058]: I1014 09:59:24.052331 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 09:59:25 crc kubenswrapper[5058]: I1014 09:59:25.773607 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e68b0860-bc9e-43b0-a2c5-4fd969d4149a","Type":"ContainerStarted","Data":"5f67b61b399cc68fe4dbd744b87fc987210a6de0e4326aff71aa58be3d350889"} Oct 14 09:59:25 crc kubenswrapper[5058]: I1014 09:59:25.805020 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.753708433 podStartE2EDuration="1m3.804995755s" podCreationTimestamp="2025-10-14 09:58:22 +0000 UTC" firstStartedPulling="2025-10-14 09:58:23.997106717 +0000 UTC m=+11451.908190533" lastFinishedPulling="2025-10-14 09:59:24.048394009 +0000 UTC m=+11511.959477855" observedRunningTime="2025-10-14 09:59:25.798891562 +0000 UTC m=+11513.709975428" watchObservedRunningTime="2025-10-14 09:59:25.804995755 +0000 UTC m=+11513.716079591" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.401006 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"] Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.405835 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.415296 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"] Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.490269 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvldp\" (UniqueName: \"kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.490679 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.490856 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.593020 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvldp\" (UniqueName: \"kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.593186 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.593271 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.593683 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.593834 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.615262 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nvldp\" (UniqueName: \"kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp\") pod \"certified-operators-tw9kb\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.735640 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:29 crc kubenswrapper[5058]: I1014 09:59:29.790514 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:59:29 crc kubenswrapper[5058]: E1014 09:59:29.791109 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 09:59:30 crc kubenswrapper[5058]: I1014 09:59:30.359312 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"] Oct 14 09:59:30 crc kubenswrapper[5058]: I1014 09:59:30.858599 5058 generic.go:334] "Generic (PLEG): container finished" podID="9e20e389-dfea-4516-a8e4-4609eac44610" containerID="fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1" exitCode=0 Oct 14 09:59:30 crc kubenswrapper[5058]: I1014 09:59:30.858660 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerDied","Data":"fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1"} Oct 14 09:59:30 crc kubenswrapper[5058]: I1014 09:59:30.858695 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerStarted","Data":"dfd115bf2c95edb1120814811a9c5da150342902d7f56a580cbbc7967d460c6a"} Oct 14 09:59:33 crc kubenswrapper[5058]: I1014 09:59:33.925184 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerStarted","Data":"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3"} Oct 14 09:59:37 crc kubenswrapper[5058]: I1014 09:59:37.981084 5058 generic.go:334] "Generic (PLEG): container finished" podID="9e20e389-dfea-4516-a8e4-4609eac44610" containerID="9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3" exitCode=0 Oct 14 09:59:37 crc kubenswrapper[5058]: I1014 09:59:37.981145 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerDied","Data":"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3"} Oct 14 09:59:40 crc kubenswrapper[5058]: I1014 09:59:40.004652 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerStarted","Data":"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249"} Oct 14 09:59:40 crc kubenswrapper[5058]: I1014 09:59:40.031266 5058 
Oct 14 09:59:40 crc kubenswrapper[5058]: I1014 09:59:40.031266 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tw9kb" podStartSLOduration=3.903917064 podStartE2EDuration="11.031248601s" podCreationTimestamp="2025-10-14 09:59:29 +0000 UTC" firstStartedPulling="2025-10-14 09:59:31.893441785 +0000 UTC m=+11519.804525591" lastFinishedPulling="2025-10-14 09:59:39.020773322 +0000 UTC m=+11526.931857128" observedRunningTime="2025-10-14 09:59:40.030257312 +0000 UTC m=+11527.941341138" watchObservedRunningTime="2025-10-14 09:59:40.031248601 +0000 UTC m=+11527.942332407"
Oct 14 09:59:43 crc kubenswrapper[5058]: I1014 09:59:43.791040 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 09:59:43 crc kubenswrapper[5058]: E1014 09:59:43.791983 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 09:59:49 crc kubenswrapper[5058]: I1014 09:59:49.736482 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tw9kb"
Oct 14 09:59:49 crc kubenswrapper[5058]: I1014 09:59:49.737158 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tw9kb"
Oct 14 09:59:49 crc kubenswrapper[5058]: I1014 09:59:49.785929 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tw9kb"
Oct 14 09:59:50 crc kubenswrapper[5058]: I1014 09:59:50.168335 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tw9kb"
Oct 14 09:59:50 crc kubenswrapper[5058]: I1014 09:59:50.233751 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"]
Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.140245 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tw9kb" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="registry-server" containerID="cri-o://f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249" gracePeriod=2
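The probe transitions above gate the pod into Ready: the startup probe reports unhealthy at 09:59:49, flips to started moments later, and only then does the readiness probe report ready at 09:59:50. OLM registry pods conventionally health-check the catalog's gRPC endpoint; the command, port, and timings below are assumptions, shown only to illustrate how a startup probe defers readiness checking:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // Until the startup probe succeeds once, the kubelet reports
    // probe="startup" status="unhealthy" and an empty readiness status,
    // as in the log above. All values here are illustrative.
    func registryProbes() (startup, readiness *corev1.Probe) {
        handler := corev1.ProbeHandler{
            Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
        }
        startup = &corev1.Probe{ProbeHandler: handler, PeriodSeconds: 10, FailureThreshold: 15}
        readiness = &corev1.Probe{ProbeHandler: handler, PeriodSeconds: 10}
        return startup, readiness
    }

    func main() {
        s, r := registryProbes()
        fmt.Printf("startup: every %ds (up to %d failures); readiness: every %ds\n",
            s.PeriodSeconds, s.FailureThreshold, r.PeriodSeconds)
    }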
Need to start a new one" pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.796807 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content\") pod \"9e20e389-dfea-4516-a8e4-4609eac44610\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.796915 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvldp\" (UniqueName: \"kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp\") pod \"9e20e389-dfea-4516-a8e4-4609eac44610\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.797038 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities\") pod \"9e20e389-dfea-4516-a8e4-4609eac44610\" (UID: \"9e20e389-dfea-4516-a8e4-4609eac44610\") " Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.806087 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp" (OuterVolumeSpecName: "kube-api-access-nvldp") pod "9e20e389-dfea-4516-a8e4-4609eac44610" (UID: "9e20e389-dfea-4516-a8e4-4609eac44610"). InnerVolumeSpecName "kube-api-access-nvldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.830473 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities" (OuterVolumeSpecName: "utilities") pod "9e20e389-dfea-4516-a8e4-4609eac44610" (UID: "9e20e389-dfea-4516-a8e4-4609eac44610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.886498 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e20e389-dfea-4516-a8e4-4609eac44610" (UID: "9e20e389-dfea-4516-a8e4-4609eac44610"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.900403 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvldp\" (UniqueName: \"kubernetes.io/projected/9e20e389-dfea-4516-a8e4-4609eac44610-kube-api-access-nvldp\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.900459 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:52 crc kubenswrapper[5058]: I1014 09:59:52.900477 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e20e389-dfea-4516-a8e4-4609eac44610-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.158908 5058 generic.go:334] "Generic (PLEG): container finished" podID="9e20e389-dfea-4516-a8e4-4609eac44610" containerID="f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249" exitCode=0 Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.158992 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerDied","Data":"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249"} Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.159042 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tw9kb" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.159070 5058 scope.go:117] "RemoveContainer" containerID="f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.159052 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw9kb" event={"ID":"9e20e389-dfea-4516-a8e4-4609eac44610","Type":"ContainerDied","Data":"dfd115bf2c95edb1120814811a9c5da150342902d7f56a580cbbc7967d460c6a"} Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.209560 5058 scope.go:117] "RemoveContainer" containerID="9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.230548 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"] Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.244525 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tw9kb"] Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.248545 5058 scope.go:117] "RemoveContainer" containerID="fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.296715 5058 scope.go:117] "RemoveContainer" containerID="f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249" Oct 14 09:59:53 crc kubenswrapper[5058]: E1014 09:59:53.297702 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249\": container with ID starting with f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249 not found: ID does not exist" containerID="f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.297755 
5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249"} err="failed to get container status \"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249\": rpc error: code = NotFound desc = could not find container \"f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249\": container with ID starting with f0efa5b01565215b6ba0f85ecb94789d4ccc27002d4df7313aaabe373e34e249 not found: ID does not exist" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.297781 5058 scope.go:117] "RemoveContainer" containerID="9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3" Oct 14 09:59:53 crc kubenswrapper[5058]: E1014 09:59:53.298198 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3\": container with ID starting with 9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3 not found: ID does not exist" containerID="9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.298270 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3"} err="failed to get container status \"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3\": rpc error: code = NotFound desc = could not find container \"9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3\": container with ID starting with 9ef02de836a0d6ba1676a9746688cbb67085ccd6c8ca56b523f0439fd3fdbee3 not found: ID does not exist" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.298314 5058 scope.go:117] "RemoveContainer" containerID="fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1" Oct 14 09:59:53 crc kubenswrapper[5058]: E1014 09:59:53.298684 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1\": container with ID starting with fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1 not found: ID does not exist" containerID="fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1" Oct 14 09:59:53 crc kubenswrapper[5058]: I1014 09:59:53.298731 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1"} err="failed to get container status \"fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1\": rpc error: code = NotFound desc = could not find container \"fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1\": container with ID starting with fd02c61391d4fac51dd8daab1939706c37e16ff151161fbf0702b4d5eda2cff1 not found: ID does not exist" Oct 14 09:59:54 crc kubenswrapper[5058]: I1014 09:59:54.790703 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 09:59:54 crc kubenswrapper[5058]: E1014 09:59:54.791074 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
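The three "DeleteContainer returned error … NotFound" pairs above are benign: the containers disappeared together with their sandbox, so the follow-up RemoveContainer calls find nothing left in CRI-O. Treating a NotFound from the runtime as success is the usual idempotent-cleanup pattern; a minimal sketch, where the remove function stands in for a CRI runtime client and only the status-code handling mirrors the log:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer treats "already gone" as success, which is why the
    // kubelet logs the NotFound error above and simply moves on.
    func removeContainer(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already removed, nothing to do\n", id)
                return nil
            }
            return fmt.Errorf("remove %s: %w", id, err)
        }
        return nil
    }

    func main() {
        notFound := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        _ = removeContainer(notFound, "f0efa5b01565")
    }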
Oct 14 09:59:54 crc kubenswrapper[5058]: I1014 09:59:54.808595 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" path="/var/lib/kubelet/pods/9e20e389-dfea-4516-a8e4-4609eac44610/volumes"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.155726 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"]
Oct 14 10:00:00 crc kubenswrapper[5058]: E1014 10:00:00.166362 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="extract-utilities"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.166389 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="extract-utilities"
Oct 14 10:00:00 crc kubenswrapper[5058]: E1014 10:00:00.166433 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="extract-content"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.166441 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="extract-content"
Oct 14 10:00:00 crc kubenswrapper[5058]: E1014 10:00:00.166454 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="registry-server"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.166460 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="registry-server"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.166678 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e20e389-dfea-4516-a8e4-4609eac44610" containerName="registry-server"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.178682 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.179499 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"]
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.183020 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.183367 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.270857 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.270973 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.271015 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkfv\" (UniqueName: \"kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.375484 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.375569 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkfv\" (UniqueName: \"kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.375721 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.377109 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.381700 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.391604 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkfv\" (UniqueName: \"kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv\") pod \"collect-profiles-29340600-87dwf\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:00 crc kubenswrapper[5058]: I1014 10:00:00.511503 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:01 crc kubenswrapper[5058]: I1014 10:00:01.017832 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"]
Oct 14 10:00:01 crc kubenswrapper[5058]: I1014 10:00:01.257601 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf" event={"ID":"842741f6-1ca3-4c19-9c3c-82e141baf04c","Type":"ContainerStarted","Data":"b492a578bc237158575b766693981caa3251cceb482ef7f0b072db33cd7d7ba9"}
Oct 14 10:00:02 crc kubenswrapper[5058]: I1014 10:00:02.273921 5058 generic.go:334] "Generic (PLEG): container finished" podID="842741f6-1ca3-4c19-9c3c-82e141baf04c" containerID="eeb7ade479f354cccb892d8f29034e8175222ba379d57f8e9ab82f2f1ead0fc3" exitCode=0
Oct 14 10:00:02 crc kubenswrapper[5058]: I1014 10:00:02.273992 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf" event={"ID":"842741f6-1ca3-4c19-9c3c-82e141baf04c","Type":"ContainerDied","Data":"eeb7ade479f354cccb892d8f29034e8175222ba379d57f8e9ab82f2f1ead0fc3"}
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.787680 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.856877 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume\") pod \"842741f6-1ca3-4c19-9c3c-82e141baf04c\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") "
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.857096 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume\") pod \"842741f6-1ca3-4c19-9c3c-82e141baf04c\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") "
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.857226 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkfv\" (UniqueName: \"kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv\") pod \"842741f6-1ca3-4c19-9c3c-82e141baf04c\" (UID: \"842741f6-1ca3-4c19-9c3c-82e141baf04c\") "
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.859919 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume" (OuterVolumeSpecName: "config-volume") pod "842741f6-1ca3-4c19-9c3c-82e141baf04c" (UID: "842741f6-1ca3-4c19-9c3c-82e141baf04c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.863937 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv" (OuterVolumeSpecName: "kube-api-access-xbkfv") pod "842741f6-1ca3-4c19-9c3c-82e141baf04c" (UID: "842741f6-1ca3-4c19-9c3c-82e141baf04c"). InnerVolumeSpecName "kube-api-access-xbkfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.863959 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "842741f6-1ca3-4c19-9c3c-82e141baf04c" (UID: "842741f6-1ca3-4c19-9c3c-82e141baf04c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.959710 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/842741f6-1ca3-4c19-9c3c-82e141baf04c-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.959748 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkfv\" (UniqueName: \"kubernetes.io/projected/842741f6-1ca3-4c19-9c3c-82e141baf04c-kube-api-access-xbkfv\") on node \"crc\" DevicePath \"\""
Oct 14 10:00:03 crc kubenswrapper[5058]: I1014 10:00:03.959761 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/842741f6-1ca3-4c19-9c3c-82e141baf04c-config-volume\") on node \"crc\" DevicePath \"\""
Oct 14 10:00:04 crc kubenswrapper[5058]: I1014 10:00:04.298955 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf" event={"ID":"842741f6-1ca3-4c19-9c3c-82e141baf04c","Type":"ContainerDied","Data":"b492a578bc237158575b766693981caa3251cceb482ef7f0b072db33cd7d7ba9"}
Oct 14 10:00:04 crc kubenswrapper[5058]: I1014 10:00:04.299450 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b492a578bc237158575b766693981caa3251cceb482ef7f0b072db33cd7d7ba9"
Oct 14 10:00:04 crc kubenswrapper[5058]: I1014 10:00:04.299028 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340600-87dwf"
Oct 14 10:00:04 crc kubenswrapper[5058]: I1014 10:00:04.859241 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6"]
Oct 14 10:00:04 crc kubenswrapper[5058]: I1014 10:00:04.867652 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340555-h6dv6"]
Oct 14 10:00:06 crc kubenswrapper[5058]: I1014 10:00:06.790373 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 10:00:06 crc kubenswrapper[5058]: E1014 10:00:06.790963 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:00:06 crc kubenswrapper[5058]: I1014 10:00:06.808309 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ba45f1-d2f5-4b78-8936-aed215eea0ef" path="/var/lib/kubelet/pods/96ba45f1-d2f5-4b78-8936-aed215eea0ef/volumes"
Oct 14 10:00:11 crc kubenswrapper[5058]: I1014 10:00:11.445864 5058 scope.go:117] "RemoveContainer" containerID="6fcb5718c38d1bfedf17c258d9730cb2a4206613974fe13989fbcf855618755b"
Oct 14 10:00:17 crc kubenswrapper[5058]: I1014 10:00:17.790003 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 10:00:17 crc kubenswrapper[5058]: E1014 10:00:17.790703 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:00:32 crc kubenswrapper[5058]: I1014 10:00:32.802536 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 10:00:32 crc kubenswrapper[5058]: E1014 10:00:32.803858 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:00:46 crc kubenswrapper[5058]: I1014 10:00:46.790624 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 10:00:46 crc kubenswrapper[5058]: E1014 10:00:46.793478 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.149664 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340601-dzqtn"]
Oct 14 10:01:00 crc kubenswrapper[5058]: E1014 10:01:00.150700 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842741f6-1ca3-4c19-9c3c-82e141baf04c" containerName="collect-profiles"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.150714 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="842741f6-1ca3-4c19-9c3c-82e141baf04c" containerName="collect-profiles"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.151066 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="842741f6-1ca3-4c19-9c3c-82e141baf04c" containerName="collect-profiles"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.151810 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.169382 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340601-dzqtn"]
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.248191 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2j5q\" (UniqueName: \"kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.248253 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.248341 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.248374 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.351330 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.351496 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2j5q\" (UniqueName: \"kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.351548 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.351637 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.362495 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.362961 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.363537 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.375763 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2j5q\" (UniqueName: \"kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q\") pod \"keystone-cron-29340601-dzqtn\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") " pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.484384 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:00 crc kubenswrapper[5058]: I1014 10:01:00.790181 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c"
Oct 14 10:01:00 crc kubenswrapper[5058]: E1014 10:01:00.790839 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:01:01 crc kubenswrapper[5058]: I1014 10:01:01.133836 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340601-dzqtn"]
Oct 14 10:01:01 crc kubenswrapper[5058]: I1014 10:01:01.928724 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340601-dzqtn" event={"ID":"1b1c6644-e9a1-4cbd-8095-debc60135fe5","Type":"ContainerStarted","Data":"3b377db667234722e6283d4d05966ad1914aa1ba857e7c966db0643a713d514f"}
Oct 14 10:01:01 crc kubenswrapper[5058]: I1014 10:01:01.929337 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340601-dzqtn" event={"ID":"1b1c6644-e9a1-4cbd-8095-debc60135fe5","Type":"ContainerStarted","Data":"a5ef20af6eb363849a3238ad6b1e99c6eaf9bd768593e031ca9c571d1c5ed1d4"}
Oct 14 10:01:01 crc kubenswrapper[5058]: I1014 10:01:01.950094 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340601-dzqtn" podStartSLOduration=1.950077695 podStartE2EDuration="1.950077695s" podCreationTimestamp="2025-10-14 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:01:01.944555468 +0000 UTC m=+11609.855639274" watchObservedRunningTime="2025-10-14 10:01:01.950077695 +0000 UTC m=+11609.861161501"
Oct 14 10:01:04 crc kubenswrapper[5058]: I1014 10:01:04.969918 5058 generic.go:334] "Generic (PLEG): container finished" podID="1b1c6644-e9a1-4cbd-8095-debc60135fe5" containerID="3b377db667234722e6283d4d05966ad1914aa1ba857e7c966db0643a713d514f" exitCode=0
Oct 14 10:01:04 crc kubenswrapper[5058]: I1014 10:01:04.970027 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340601-dzqtn" event={"ID":"1b1c6644-e9a1-4cbd-8095-debc60135fe5","Type":"ContainerDied","Data":"3b377db667234722e6283d4d05966ad1914aa1ba857e7c966db0643a713d514f"}
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.485479 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340601-dzqtn"
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.578523 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle\") pod \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") "
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.579267 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data\") pod \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") "
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.579304 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2j5q\" (UniqueName: \"kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q\") pod \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") "
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.579425 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys\") pod \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\" (UID: \"1b1c6644-e9a1-4cbd-8095-debc60135fe5\") "
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.597499 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1b1c6644-e9a1-4cbd-8095-debc60135fe5" (UID: "1b1c6644-e9a1-4cbd-8095-debc60135fe5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.597939 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q" (OuterVolumeSpecName: "kube-api-access-h2j5q") pod "1b1c6644-e9a1-4cbd-8095-debc60135fe5" (UID: "1b1c6644-e9a1-4cbd-8095-debc60135fe5"). InnerVolumeSpecName "kube-api-access-h2j5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.639886 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data" (OuterVolumeSpecName: "config-data") pod "1b1c6644-e9a1-4cbd-8095-debc60135fe5" (UID: "1b1c6644-e9a1-4cbd-8095-debc60135fe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.682436 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2j5q\" (UniqueName: \"kubernetes.io/projected/1b1c6644-e9a1-4cbd-8095-debc60135fe5-kube-api-access-h2j5q\") on node \"crc\" DevicePath \"\""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.682475 5058 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.682488 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.694109 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b1c6644-e9a1-4cbd-8095-debc60135fe5" (UID: "1b1c6644-e9a1-4cbd-8095-debc60135fe5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.786149 5058 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1c6644-e9a1-4cbd-8095-debc60135fe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.989886 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340601-dzqtn" event={"ID":"1b1c6644-e9a1-4cbd-8095-debc60135fe5","Type":"ContainerDied","Data":"a5ef20af6eb363849a3238ad6b1e99c6eaf9bd768593e031ca9c571d1c5ed1d4"}
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.989922 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ef20af6eb363849a3238ad6b1e99c6eaf9bd768593e031ca9c571d1c5ed1d4"
Oct 14 10:01:06 crc kubenswrapper[5058]: I1014 10:01:06.989935 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340601-dzqtn"
Need to start a new one" pod="openstack/keystone-cron-29340601-dzqtn" Oct 14 10:01:14 crc kubenswrapper[5058]: I1014 10:01:14.790755 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:01:14 crc kubenswrapper[5058]: E1014 10:01:14.791664 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:01:29 crc kubenswrapper[5058]: I1014 10:01:29.790468 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:01:29 crc kubenswrapper[5058]: E1014 10:01:29.795340 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:01:42 crc kubenswrapper[5058]: I1014 10:01:42.799179 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:01:42 crc kubenswrapper[5058]: E1014 10:01:42.800116 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:01:55 crc kubenswrapper[5058]: I1014 10:01:55.789928 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:01:55 crc kubenswrapper[5058]: E1014 10:01:55.792167 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:02:09 crc kubenswrapper[5058]: I1014 10:02:09.791071 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:02:10 crc kubenswrapper[5058]: I1014 10:02:10.702636 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f"} Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.442588 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:06 crc kubenswrapper[5058]: E1014 10:03:06.443752 5058 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1b1c6644-e9a1-4cbd-8095-debc60135fe5" containerName="keystone-cron" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.443769 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1c6644-e9a1-4cbd-8095-debc60135fe5" containerName="keystone-cron" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.444012 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1c6644-e9a1-4cbd-8095-debc60135fe5" containerName="keystone-cron" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.445636 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.464086 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.539236 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.539317 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smt5w\" (UniqueName: \"kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.539520 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.641604 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.641686 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.641724 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smt5w\" (UniqueName: \"kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.642204 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities\") pod 
\"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.642286 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.674196 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smt5w\" (UniqueName: \"kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w\") pod \"community-operators-mqghf\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:06 crc kubenswrapper[5058]: I1014 10:03:06.783290 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:07 crc kubenswrapper[5058]: I1014 10:03:07.395414 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:08 crc kubenswrapper[5058]: I1014 10:03:08.329265 5058 generic.go:334] "Generic (PLEG): container finished" podID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerID="5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2" exitCode=0 Oct 14 10:03:08 crc kubenswrapper[5058]: I1014 10:03:08.329366 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerDied","Data":"5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2"} Oct 14 10:03:08 crc kubenswrapper[5058]: I1014 10:03:08.329670 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerStarted","Data":"79bbe818c6a87014541241a227db7398223191840dbbf1614aabc14341392d15"} Oct 14 10:03:08 crc kubenswrapper[5058]: I1014 10:03:08.332040 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:03:11 crc kubenswrapper[5058]: I1014 10:03:11.362865 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerStarted","Data":"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff"} Oct 14 10:03:12 crc kubenswrapper[5058]: I1014 10:03:12.374381 5058 generic.go:334] "Generic (PLEG): container finished" podID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerID="feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff" exitCode=0 Oct 14 10:03:12 crc kubenswrapper[5058]: I1014 10:03:12.374423 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerDied","Data":"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff"} Oct 14 10:03:13 crc kubenswrapper[5058]: I1014 10:03:13.389362 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" 
event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerStarted","Data":"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86"} Oct 14 10:03:13 crc kubenswrapper[5058]: I1014 10:03:13.415228 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqghf" podStartSLOduration=2.912131944 podStartE2EDuration="7.415211329s" podCreationTimestamp="2025-10-14 10:03:06 +0000 UTC" firstStartedPulling="2025-10-14 10:03:08.331756772 +0000 UTC m=+11736.242840588" lastFinishedPulling="2025-10-14 10:03:12.834836167 +0000 UTC m=+11740.745919973" observedRunningTime="2025-10-14 10:03:13.410133995 +0000 UTC m=+11741.321217821" watchObservedRunningTime="2025-10-14 10:03:13.415211329 +0000 UTC m=+11741.326295125" Oct 14 10:03:16 crc kubenswrapper[5058]: I1014 10:03:16.786062 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:16 crc kubenswrapper[5058]: I1014 10:03:16.786617 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:16 crc kubenswrapper[5058]: I1014 10:03:16.862035 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:26 crc kubenswrapper[5058]: I1014 10:03:26.861720 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:26 crc kubenswrapper[5058]: I1014 10:03:26.933549 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:27 crc kubenswrapper[5058]: I1014 10:03:27.550581 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mqghf" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="registry-server" containerID="cri-o://d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86" gracePeriod=2 Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.168306 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.272728 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities\") pod \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.273128 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content\") pod \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.273373 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smt5w\" (UniqueName: \"kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w\") pod \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\" (UID: \"3364c551-c5d2-4e92-b39a-8c994d81f2ce\") " Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.273592 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities" (OuterVolumeSpecName: "utilities") pod "3364c551-c5d2-4e92-b39a-8c994d81f2ce" (UID: "3364c551-c5d2-4e92-b39a-8c994d81f2ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.274717 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.279086 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w" (OuterVolumeSpecName: "kube-api-access-smt5w") pod "3364c551-c5d2-4e92-b39a-8c994d81f2ce" (UID: "3364c551-c5d2-4e92-b39a-8c994d81f2ce"). InnerVolumeSpecName "kube-api-access-smt5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.332310 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3364c551-c5d2-4e92-b39a-8c994d81f2ce" (UID: "3364c551-c5d2-4e92-b39a-8c994d81f2ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.376856 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3364c551-c5d2-4e92-b39a-8c994d81f2ce-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.376889 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smt5w\" (UniqueName: \"kubernetes.io/projected/3364c551-c5d2-4e92-b39a-8c994d81f2ce-kube-api-access-smt5w\") on node \"crc\" DevicePath \"\"" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.561179 5058 generic.go:334] "Generic (PLEG): container finished" podID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerID="d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86" exitCode=0 Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.561217 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerDied","Data":"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86"} Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.561241 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqghf" event={"ID":"3364c551-c5d2-4e92-b39a-8c994d81f2ce","Type":"ContainerDied","Data":"79bbe818c6a87014541241a227db7398223191840dbbf1614aabc14341392d15"} Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.561260 5058 scope.go:117] "RemoveContainer" containerID="d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.561392 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqghf" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.595993 5058 scope.go:117] "RemoveContainer" containerID="feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.618577 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.635882 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mqghf"] Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.638466 5058 scope.go:117] "RemoveContainer" containerID="5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.690181 5058 scope.go:117] "RemoveContainer" containerID="d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86" Oct 14 10:03:28 crc kubenswrapper[5058]: E1014 10:03:28.690520 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86\": container with ID starting with d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86 not found: ID does not exist" containerID="d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.690550 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86"} err="failed to get container status \"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86\": rpc error: code = NotFound desc = could not find container \"d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86\": container with ID starting with d495eaf755a3583143fe10f3c746aaf40ac9fe594377750a0f73c11354d74a86 not found: ID does not exist" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.690568 5058 scope.go:117] "RemoveContainer" containerID="feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff" Oct 14 10:03:28 crc kubenswrapper[5058]: E1014 10:03:28.690961 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff\": container with ID starting with feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff not found: ID does not exist" containerID="feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.691009 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff"} err="failed to get container status \"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff\": rpc error: code = NotFound desc = could not find container \"feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff\": container with ID starting with feea1bad169fdecceefe135464164688edc17e3fb8388d158ca8d12f416edbff not found: ID does not exist" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.691034 5058 scope.go:117] "RemoveContainer" containerID="5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2" Oct 14 10:03:28 crc kubenswrapper[5058]: E1014 10:03:28.691379 5058 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2\": container with ID starting with 5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2 not found: ID does not exist" containerID="5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.691406 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2"} err="failed to get container status \"5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2\": rpc error: code = NotFound desc = could not find container \"5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2\": container with ID starting with 5fd78c9293fa04ded6915f1933d89bc3b4e5a0c377cc4bad09254dfb2aefcec2 not found: ID does not exist" Oct 14 10:03:28 crc kubenswrapper[5058]: I1014 10:03:28.805170 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" path="/var/lib/kubelet/pods/3364c551-c5d2-4e92-b39a-8c994d81f2ce/volumes" Oct 14 10:04:33 crc kubenswrapper[5058]: I1014 10:04:33.656720 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:04:33 crc kubenswrapper[5058]: I1014 10:04:33.657225 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:05:03 crc kubenswrapper[5058]: I1014 10:05:03.656429 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:05:03 crc kubenswrapper[5058]: I1014 10:05:03.657076 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:05:33 crc kubenswrapper[5058]: I1014 10:05:33.656103 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:05:33 crc kubenswrapper[5058]: I1014 10:05:33.656940 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:05:33 crc kubenswrapper[5058]: I1014 10:05:33.656995 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 10:05:33 crc kubenswrapper[5058]: I1014 10:05:33.658905 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:05:33 crc kubenswrapper[5058]: I1014 10:05:33.658996 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f" gracePeriod=600 Oct 14 10:05:34 crc kubenswrapper[5058]: I1014 10:05:34.015121 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f" exitCode=0 Oct 14 10:05:34 crc kubenswrapper[5058]: I1014 10:05:34.015380 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f"} Oct 14 10:05:34 crc kubenswrapper[5058]: I1014 10:05:34.015548 5058 scope.go:117] "RemoveContainer" containerID="abc214f4157b8c758fc2233893742db3556fc8dd999e8491ee2b71b4a2e6cf1c" Oct 14 10:05:35 crc kubenswrapper[5058]: I1014 10:05:35.033770 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"} Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.159998 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:05:58 crc kubenswrapper[5058]: E1014 10:05:58.160944 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="extract-content" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.160955 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="extract-content" Oct 14 10:05:58 crc kubenswrapper[5058]: E1014 10:05:58.160972 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="extract-utilities" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.160978 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="extract-utilities" Oct 14 10:05:58 crc kubenswrapper[5058]: E1014 10:05:58.160997 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="registry-server" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.161004 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" containerName="registry-server" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.161402 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="3364c551-c5d2-4e92-b39a-8c994d81f2ce" 
containerName="registry-server" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.162942 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.178975 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.279408 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.279584 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvm7t\" (UniqueName: \"kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.280255 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.382980 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvm7t\" (UniqueName: \"kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.383068 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.383267 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.383613 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.383668 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 
14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.406667 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvm7t\" (UniqueName: \"kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t\") pod \"redhat-operators-nqprz\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.484464 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:05:58 crc kubenswrapper[5058]: I1014 10:05:58.978745 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:05:59 crc kubenswrapper[5058]: I1014 10:05:59.332336 5058 generic.go:334] "Generic (PLEG): container finished" podID="bda57de1-a617-42a6-876d-25154f1dbb03" containerID="584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71" exitCode=0 Oct 14 10:05:59 crc kubenswrapper[5058]: I1014 10:05:59.332377 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerDied","Data":"584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71"} Oct 14 10:05:59 crc kubenswrapper[5058]: I1014 10:05:59.332402 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerStarted","Data":"cff8f8a60d4074fc714566e61b37ca732194c7047bf520781d72dc39c9e72d6f"} Oct 14 10:06:00 crc kubenswrapper[5058]: I1014 10:06:00.343589 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerStarted","Data":"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6"} Oct 14 10:06:04 crc kubenswrapper[5058]: I1014 10:06:04.416992 5058 generic.go:334] "Generic (PLEG): container finished" podID="bda57de1-a617-42a6-876d-25154f1dbb03" containerID="98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6" exitCode=0 Oct 14 10:06:04 crc kubenswrapper[5058]: I1014 10:06:04.417058 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerDied","Data":"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6"} Oct 14 10:06:05 crc kubenswrapper[5058]: I1014 10:06:05.431422 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerStarted","Data":"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797"} Oct 14 10:06:05 crc kubenswrapper[5058]: I1014 10:06:05.461315 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqprz" podStartSLOduration=1.9491027380000001 podStartE2EDuration="7.461293392s" podCreationTimestamp="2025-10-14 10:05:58 +0000 UTC" firstStartedPulling="2025-10-14 10:05:59.335276184 +0000 UTC m=+11907.246359980" lastFinishedPulling="2025-10-14 10:06:04.847466808 +0000 UTC m=+11912.758550634" observedRunningTime="2025-10-14 10:06:05.455318502 +0000 UTC m=+11913.366402348" watchObservedRunningTime="2025-10-14 10:06:05.461293392 +0000 UTC m=+11913.372377218" Oct 14 10:06:08 crc kubenswrapper[5058]: I1014 
10:06:08.485208 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:08 crc kubenswrapper[5058]: I1014 10:06:08.485693 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:09 crc kubenswrapper[5058]: I1014 10:06:09.557970 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nqprz" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="registry-server" probeResult="failure" output=< Oct 14 10:06:09 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s Oct 14 10:06:09 crc kubenswrapper[5058]: > Oct 14 10:06:18 crc kubenswrapper[5058]: I1014 10:06:18.567675 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:18 crc kubenswrapper[5058]: I1014 10:06:18.648695 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:18 crc kubenswrapper[5058]: I1014 10:06:18.821002 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:06:19 crc kubenswrapper[5058]: I1014 10:06:19.601610 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nqprz" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="registry-server" containerID="cri-o://0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797" gracePeriod=2 Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.305405 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.358765 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvm7t\" (UniqueName: \"kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t\") pod \"bda57de1-a617-42a6-876d-25154f1dbb03\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.358930 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities\") pod \"bda57de1-a617-42a6-876d-25154f1dbb03\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.359148 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content\") pod \"bda57de1-a617-42a6-876d-25154f1dbb03\" (UID: \"bda57de1-a617-42a6-876d-25154f1dbb03\") " Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.360509 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities" (OuterVolumeSpecName: "utilities") pod "bda57de1-a617-42a6-876d-25154f1dbb03" (UID: "bda57de1-a617-42a6-876d-25154f1dbb03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.366621 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t" (OuterVolumeSpecName: "kube-api-access-vvm7t") pod "bda57de1-a617-42a6-876d-25154f1dbb03" (UID: "bda57de1-a617-42a6-876d-25154f1dbb03"). InnerVolumeSpecName "kube-api-access-vvm7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.449454 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bda57de1-a617-42a6-876d-25154f1dbb03" (UID: "bda57de1-a617-42a6-876d-25154f1dbb03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.461521 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvm7t\" (UniqueName: \"kubernetes.io/projected/bda57de1-a617-42a6-876d-25154f1dbb03-kube-api-access-vvm7t\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.461560 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.461570 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda57de1-a617-42a6-876d-25154f1dbb03-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.614049 5058 generic.go:334] "Generic (PLEG): container finished" podID="bda57de1-a617-42a6-876d-25154f1dbb03" containerID="0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797" exitCode=0 Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.614096 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerDied","Data":"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797"} Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.614114 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqprz" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.614372 5058 scope.go:117] "RemoveContainer" containerID="0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.614358 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqprz" event={"ID":"bda57de1-a617-42a6-876d-25154f1dbb03","Type":"ContainerDied","Data":"cff8f8a60d4074fc714566e61b37ca732194c7047bf520781d72dc39c9e72d6f"} Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.637378 5058 scope.go:117] "RemoveContainer" containerID="98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.658542 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.668861 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nqprz"] Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.691193 5058 scope.go:117] "RemoveContainer" containerID="584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.731046 5058 scope.go:117] "RemoveContainer" containerID="0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797" Oct 14 10:06:20 crc kubenswrapper[5058]: E1014 10:06:20.731885 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797\": container with ID starting with 0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797 not found: ID does not exist" containerID="0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.731927 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797"} err="failed to get container status \"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797\": rpc error: code = NotFound desc = could not find container \"0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797\": container with ID starting with 0738607ec54f086c9a69fca5c114add01c290600309039c103a8f1c145827797 not found: ID does not exist" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.731950 5058 scope.go:117] "RemoveContainer" containerID="98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6" Oct 14 10:06:20 crc kubenswrapper[5058]: E1014 10:06:20.732444 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6\": container with ID starting with 98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6 not found: ID does not exist" containerID="98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.732478 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6"} err="failed to get container status \"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6\": rpc error: code = NotFound desc = could not find container 
\"98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6\": container with ID starting with 98fe8468936eb47924606f72ab24a1cefd69ee3d57b67a25938196680b4ae7a6 not found: ID does not exist" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.732500 5058 scope.go:117] "RemoveContainer" containerID="584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71" Oct 14 10:06:20 crc kubenswrapper[5058]: E1014 10:06:20.732865 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71\": container with ID starting with 584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71 not found: ID does not exist" containerID="584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.732891 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71"} err="failed to get container status \"584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71\": rpc error: code = NotFound desc = could not find container \"584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71\": container with ID starting with 584b447e94a802044b79a81b212615948ba1f82cab46c4851c6e35071365bb71 not found: ID does not exist" Oct 14 10:06:20 crc kubenswrapper[5058]: I1014 10:06:20.805304 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" path="/var/lib/kubelet/pods/bda57de1-a617-42a6-876d-25154f1dbb03/volumes" Oct 14 10:08:03 crc kubenswrapper[5058]: I1014 10:08:03.656715 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:08:03 crc kubenswrapper[5058]: I1014 10:08:03.657446 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:08:33 crc kubenswrapper[5058]: I1014 10:08:33.656347 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:08:33 crc kubenswrapper[5058]: I1014 10:08:33.657377 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.149667 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"] Oct 14 10:08:55 crc kubenswrapper[5058]: E1014 10:08:55.150979 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="registry-server" 
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.151003 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="registry-server"
Oct 14 10:08:55 crc kubenswrapper[5058]: E1014 10:08:55.151068 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="extract-utilities"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.151082 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="extract-utilities"
Oct 14 10:08:55 crc kubenswrapper[5058]: E1014 10:08:55.151098 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="extract-content"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.151111 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="extract-content"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.151505 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda57de1-a617-42a6-876d-25154f1dbb03" containerName="registry-server"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.154397 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.165969 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"]
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.237687 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.237971 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89sv\" (UniqueName: \"kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.238432 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.340783 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.341048 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.341104 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89sv\" (UniqueName: \"kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.341366 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.341706 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.375849 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89sv\" (UniqueName: \"kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv\") pod \"redhat-marketplace-dp55c\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") " pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:55 crc kubenswrapper[5058]: I1014 10:08:55.488295 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:08:56 crc kubenswrapper[5058]: I1014 10:08:56.048819 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"]
Oct 14 10:08:56 crc kubenswrapper[5058]: I1014 10:08:56.498029 5058 generic.go:334] "Generic (PLEG): container finished" podID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerID="3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36" exitCode=0
Oct 14 10:08:56 crc kubenswrapper[5058]: I1014 10:08:56.498148 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerDied","Data":"3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36"}
Oct 14 10:08:56 crc kubenswrapper[5058]: I1014 10:08:56.498282 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerStarted","Data":"79dc2a23118b67d13fcf6316f38d868e0de897c992ebe936155a6f248f765598"}
Oct 14 10:08:56 crc kubenswrapper[5058]: I1014 10:08:56.501739 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 10:08:57 crc kubenswrapper[5058]: I1014 10:08:57.510591 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerStarted","Data":"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"}
Oct 14 10:08:58 crc kubenswrapper[5058]: I1014 10:08:58.550447 5058 generic.go:334] "Generic (PLEG): container finished" podID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerID="50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1" exitCode=0
Oct 14 10:08:58 crc kubenswrapper[5058]: I1014 10:08:58.550505 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerDied","Data":"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"}
Oct 14 10:08:59 crc kubenswrapper[5058]: I1014 10:08:59.567989 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerStarted","Data":"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"}
Oct 14 10:09:03 crc kubenswrapper[5058]: I1014 10:09:03.656173 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 10:09:03 crc kubenswrapper[5058]: I1014 10:09:03.657039 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 10:09:03 crc kubenswrapper[5058]: I1014 10:09:03.657108 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs"
Oct 14 10:09:03 crc kubenswrapper[5058]: I1014 10:09:03.658267 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 10:09:03 crc kubenswrapper[5058]: I1014 10:09:03.658379 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" gracePeriod=600
Oct 14 10:09:03 crc kubenswrapper[5058]: E1014 10:09:03.785254 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:09:04 crc kubenswrapper[5058]: I1014 10:09:04.638093 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" exitCode=0
Oct 14 10:09:04 crc kubenswrapper[5058]: I1014 10:09:04.638138 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"}
Oct 14 10:09:04 crc kubenswrapper[5058]: I1014 10:09:04.638174 5058 scope.go:117] "RemoveContainer" containerID="a11bbc342d4620eecf266d40e16f0336b8ed6ae65ff1d9134eddf56353c7f16f"
Oct 14 10:09:04 crc kubenswrapper[5058]: I1014 10:09:04.639222 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:09:04 crc kubenswrapper[5058]: E1014 10:09:04.639880 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:09:04 crc kubenswrapper[5058]: I1014 10:09:04.675993 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dp55c" podStartSLOduration=6.912089077 podStartE2EDuration="9.675960023s" podCreationTimestamp="2025-10-14 10:08:55 +0000 UTC" firstStartedPulling="2025-10-14 10:08:56.501538846 +0000 UTC m=+12084.412622642" lastFinishedPulling="2025-10-14 10:08:59.265409772 +0000 UTC m=+12087.176493588" observedRunningTime="2025-10-14 10:08:59.593205341 +0000 UTC m=+12087.504289157" watchObservedRunningTime="2025-10-14 10:09:04.675960023 +0000 UTC m=+12092.587043869"
Oct 14 10:09:05 crc kubenswrapper[5058]: I1014 10:09:05.488407 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:05 crc kubenswrapper[5058]: I1014 10:09:05.488867 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:05 crc kubenswrapper[5058]: I1014 10:09:05.566127 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:05 crc kubenswrapper[5058]: I1014 10:09:05.701828 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:07 crc kubenswrapper[5058]: I1014 10:09:07.529127 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"]
Oct 14 10:09:07 crc kubenswrapper[5058]: I1014 10:09:07.684086 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dp55c" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="registry-server" containerID="cri-o://300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0" gracePeriod=2
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.341303 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.519954 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content\") pod \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") "
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.520234 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities\") pod \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") "
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.520371 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t89sv\" (UniqueName: \"kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv\") pod \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\" (UID: \"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e\") "
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.521026 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities" (OuterVolumeSpecName: "utilities") pod "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" (UID: "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.527988 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv" (OuterVolumeSpecName: "kube-api-access-t89sv") pod "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" (UID: "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e"). InnerVolumeSpecName "kube-api-access-t89sv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.539252 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" (UID: "a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.622780 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t89sv\" (UniqueName: \"kubernetes.io/projected/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-kube-api-access-t89sv\") on node \"crc\" DevicePath \"\""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.622836 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.622853 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.697258 5058 generic.go:334] "Generic (PLEG): container finished" podID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerID="300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0" exitCode=0
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.697297 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerDied","Data":"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"}
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.697326 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp55c" event={"ID":"a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e","Type":"ContainerDied","Data":"79dc2a23118b67d13fcf6316f38d868e0de897c992ebe936155a6f248f765598"}
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.697344 5058 scope.go:117] "RemoveContainer" containerID="300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.697384 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp55c"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.737043 5058 scope.go:117] "RemoveContainer" containerID="50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.748739 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"]
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.759277 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp55c"]
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.796516 5058 scope.go:117] "RemoveContainer" containerID="3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.813909 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" path="/var/lib/kubelet/pods/a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e/volumes"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.825981 5058 scope.go:117] "RemoveContainer" containerID="300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"
Oct 14 10:09:08 crc kubenswrapper[5058]: E1014 10:09:08.826596 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0\": container with ID starting with 300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0 not found: ID does not exist" containerID="300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.826658 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0"} err="failed to get container status \"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0\": rpc error: code = NotFound desc = could not find container \"300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0\": container with ID starting with 300e7e2fa9fa066954334e001ed441eb4ab8e6aca20946c5c44a4931f9f89af0 not found: ID does not exist"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.826689 5058 scope.go:117] "RemoveContainer" containerID="50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"
Oct 14 10:09:08 crc kubenswrapper[5058]: E1014 10:09:08.827141 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1\": container with ID starting with 50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1 not found: ID does not exist" containerID="50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.827175 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1"} err="failed to get container status \"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1\": rpc error: code = NotFound desc = could not find container \"50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1\": container with ID starting with 50c3b328a557d0c0eeb8cc41d5fd64d5da73fa5911ef75b89246fb8d306b8fe1 not found: ID does not exist"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.827239 5058 scope.go:117] "RemoveContainer" containerID="3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36"
Oct 14 10:09:08 crc kubenswrapper[5058]: E1014 10:09:08.827839 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36\": container with ID starting with 3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36 not found: ID does not exist" containerID="3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36"
Oct 14 10:09:08 crc kubenswrapper[5058]: I1014 10:09:08.827881 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36"} err="failed to get container status \"3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36\": rpc error: code = NotFound desc = could not find container \"3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36\": container with ID starting with 3cf23bed86e671c22bd74ea6afaf6f08dddf4feaa6971f58a136fc099c36cd36 not found: ID does not exist"
Oct 14 10:09:18 crc kubenswrapper[5058]: I1014 10:09:18.790988 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:09:18 crc kubenswrapper[5058]: E1014 10:09:18.792209 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:09:30 crc kubenswrapper[5058]: I1014 10:09:30.790600 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:09:30 crc kubenswrapper[5058]: E1014 10:09:30.791409 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:09:42 crc kubenswrapper[5058]: I1014 10:09:42.804392 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:09:42 crc kubenswrapper[5058]: E1014 10:09:42.805115 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:09:54 crc kubenswrapper[5058]: I1014 10:09:54.794409 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:09:54 crc kubenswrapper[5058]: E1014 10:09:54.795897 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:10:08 crc kubenswrapper[5058]: I1014 10:10:08.790490 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:10:08 crc kubenswrapper[5058]: E1014 10:10:08.791554 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:10:20 crc kubenswrapper[5058]: I1014 10:10:20.789696 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:10:20 crc kubenswrapper[5058]: E1014 10:10:20.790618 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8"
Oct 14 10:10:21 crc kubenswrapper[5058]: I1014 10:10:21.592565 5058 generic.go:334] "Generic (PLEG): container finished" podID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" containerID="5f67b61b399cc68fe4dbd744b87fc987210a6de0e4326aff71aa58be3d350889" exitCode=0
Oct 14 10:10:21 crc kubenswrapper[5058]: I1014 10:10:21.592703 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e68b0860-bc9e-43b0-a2c5-4fd969d4149a","Type":"ContainerDied","Data":"5f67b61b399cc68fe4dbd744b87fc987210a6de0e4326aff71aa58be3d350889"}
Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.214755 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375322 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375427 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpxlp\" (UniqueName: \"kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375461 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375708 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375765 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375832 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375858 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375876 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.375907 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir\") pod \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\" (UID: \"e68b0860-bc9e-43b0-a2c5-4fd969d4149a\") " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.376935 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.377388 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data" (OuterVolumeSpecName: "config-data") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.381864 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.382016 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.383937 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp" (OuterVolumeSpecName: "kube-api-access-cpxlp") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "kube-api-access-cpxlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.406263 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.411713 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.419544 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.445789 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e68b0860-bc9e-43b0-a2c5-4fd969d4149a" (UID: "e68b0860-bc9e-43b0-a2c5-4fd969d4149a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480062 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpxlp\" (UniqueName: \"kubernetes.io/projected/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-kube-api-access-cpxlp\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480356 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480490 5058 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480581 5058 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480667 5058 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480759 5058 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480872 5058 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.480971 5058 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.481062 5058 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e68b0860-bc9e-43b0-a2c5-4fd969d4149a-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.529018 5058 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.584148 5058 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.621711 5058 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"e68b0860-bc9e-43b0-a2c5-4fd969d4149a","Type":"ContainerDied","Data":"6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507"} Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.621756 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6044125e936bdfdd7358e455d1be0e42d4c0733150a7b5e8de7b87a2a4ce3507" Oct 14 10:10:23 crc kubenswrapper[5058]: I1014 10:10:23.621778 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 10:10:32 crc kubenswrapper[5058]: I1014 10:10:32.803243 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:10:32 crc kubenswrapper[5058]: E1014 10:10:32.803877 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.863473 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 10:10:35 crc kubenswrapper[5058]: E1014 10:10:35.864366 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" containerName="tempest-tests-tempest-tests-runner" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864382 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" containerName="tempest-tests-tempest-tests-runner" Oct 14 10:10:35 crc kubenswrapper[5058]: E1014 10:10:35.864402 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="extract-content" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864409 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="extract-content" Oct 14 10:10:35 crc kubenswrapper[5058]: E1014 10:10:35.864443 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="registry-server" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864451 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="registry-server" Oct 14 10:10:35 crc kubenswrapper[5058]: E1014 10:10:35.864468 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="extract-utilities" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864475 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="extract-utilities" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864764 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68b0860-bc9e-43b0-a2c5-4fd969d4149a" containerName="tempest-tests-tempest-tests-runner" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.864809 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fc57e2-fc5a-4ece-b4c4-58749cfdb00e" containerName="registry-server" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 
10:10:35.865823 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.868751 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vjsll" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.886016 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.944066 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:35 crc kubenswrapper[5058]: I1014 10:10:35.944361 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m654\" (UniqueName: \"kubernetes.io/projected/b20631a1-da6f-4256-8ec2-15517ec5767a-kube-api-access-5m654\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.048413 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.048484 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m654\" (UniqueName: \"kubernetes.io/projected/b20631a1-da6f-4256-8ec2-15517ec5767a-kube-api-access-5m654\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.049106 5058 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.072446 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m654\" (UniqueName: \"kubernetes.io/projected/b20631a1-da6f-4256-8ec2-15517ec5767a-kube-api-access-5m654\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.096987 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b20631a1-da6f-4256-8ec2-15517ec5767a\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.190980 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.676757 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 10:10:36 crc kubenswrapper[5058]: I1014 10:10:36.812042 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b20631a1-da6f-4256-8ec2-15517ec5767a","Type":"ContainerStarted","Data":"d7c4a25fb0b35c32db3985bbb7e00b04d0275948e8000805f6cd6c1adfc1b491"} Oct 14 10:10:38 crc kubenswrapper[5058]: I1014 10:10:38.855788 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b20631a1-da6f-4256-8ec2-15517ec5767a","Type":"ContainerStarted","Data":"4a193d2ac28bd479ff707b1a249ff7f7a18f6d067cbb58491acbfb88554d24db"} Oct 14 10:10:38 crc kubenswrapper[5058]: I1014 10:10:38.871612 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.692641714 podStartE2EDuration="3.871595402s" podCreationTimestamp="2025-10-14 10:10:35 +0000 UTC" firstStartedPulling="2025-10-14 10:10:36.680456898 +0000 UTC m=+12184.591540714" lastFinishedPulling="2025-10-14 10:10:37.859410596 +0000 UTC m=+12185.770494402" observedRunningTime="2025-10-14 10:10:38.866930479 +0000 UTC m=+12186.778014305" watchObservedRunningTime="2025-10-14 10:10:38.871595402 +0000 UTC m=+12186.782679198" Oct 14 10:10:43 crc kubenswrapper[5058]: I1014 10:10:43.791036 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:10:43 crc kubenswrapper[5058]: E1014 10:10:43.792244 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:10:55 crc kubenswrapper[5058]: I1014 10:10:55.791528 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:10:55 crc kubenswrapper[5058]: E1014 10:10:55.792433 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:11:07 crc kubenswrapper[5058]: I1014 10:11:07.791225 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:11:07 crc kubenswrapper[5058]: E1014 10:11:07.792602 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:11:22 crc kubenswrapper[5058]: I1014 10:11:22.805658 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:11:22 crc kubenswrapper[5058]: E1014 10:11:22.806891 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.465871 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jb8bt/must-gather-kt7kp"] Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.468943 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.471112 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jb8bt"/"openshift-service-ca.crt" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.471142 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jb8bt"/"kube-root-ca.crt" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.474896 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jb8bt"/"default-dockercfg-wj98j" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.478279 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jb8bt/must-gather-kt7kp"] Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.506865 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4fh\" (UniqueName: \"kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh\") pod \"must-gather-kt7kp\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.507016 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output\") pod \"must-gather-kt7kp\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.608393 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output\") pod \"must-gather-kt7kp\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.608519 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4fh\" (UniqueName: \"kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh\") pod \"must-gather-kt7kp\" (UID: 
\"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.609014 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output\") pod \"must-gather-kt7kp\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.627951 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4fh\" (UniqueName: \"kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh\") pod \"must-gather-kt7kp\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") " pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.787011 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:11:34 crc kubenswrapper[5058]: I1014 10:11:34.790833 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:11:34 crc kubenswrapper[5058]: E1014 10:11:34.791119 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:11:35 crc kubenswrapper[5058]: I1014 10:11:35.280612 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jb8bt/must-gather-kt7kp"] Oct 14 10:11:35 crc kubenswrapper[5058]: I1014 10:11:35.596822 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" event={"ID":"85236c25-f74f-4600-afa9-09431b234bc5","Type":"ContainerStarted","Data":"058ebe9266fa11b70d9d2ce43a0fbf82b0e0acf6e9db10a36d61fe037b7a23b6"} Oct 14 10:11:40 crc kubenswrapper[5058]: I1014 10:11:40.680828 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" event={"ID":"85236c25-f74f-4600-afa9-09431b234bc5","Type":"ContainerStarted","Data":"047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d"} Oct 14 10:11:40 crc kubenswrapper[5058]: I1014 10:11:40.681484 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" event={"ID":"85236c25-f74f-4600-afa9-09431b234bc5","Type":"ContainerStarted","Data":"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5"} Oct 14 10:11:40 crc kubenswrapper[5058]: I1014 10:11:40.701914 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" podStartSLOduration=2.355103711 podStartE2EDuration="6.701893769s" podCreationTimestamp="2025-10-14 10:11:34 +0000 UTC" firstStartedPulling="2025-10-14 10:11:35.288283972 +0000 UTC m=+12243.199367788" lastFinishedPulling="2025-10-14 10:11:39.63507403 +0000 UTC m=+12247.546157846" observedRunningTime="2025-10-14 10:11:40.695885698 +0000 UTC m=+12248.606969514" watchObservedRunningTime="2025-10-14 10:11:40.701893769 +0000 UTC m=+12248.612977575" Oct 14 10:11:43 crc kubenswrapper[5058]: E1014 
10:11:43.325474 5058 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.169:58458->38.102.83.169:42833: write tcp 38.102.83.169:58458->38.102.83.169:42833: write: broken pipe Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.473930 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-ddkj8"] Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.476542 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.635178 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9vh\" (UniqueName: \"kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.635335 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.736716 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.736863 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.736898 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt9vh\" (UniqueName: \"kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.757216 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt9vh\" (UniqueName: \"kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh\") pod \"crc-debug-ddkj8\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: I1014 10:11:44.801365 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:11:44 crc kubenswrapper[5058]: W1014 10:11:44.869595 5058 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89c8bccc_ef12_4475_bc58_9139d5f602b6.slice/crio-8bdffaef974f9acb526b68f2a896cf011b71766aef44b77a27a2128d545366b8 WatchSource:0}: Error finding container 8bdffaef974f9acb526b68f2a896cf011b71766aef44b77a27a2128d545366b8: Status 404 returned error can't find the container with id 8bdffaef974f9acb526b68f2a896cf011b71766aef44b77a27a2128d545366b8 Oct 14 10:11:45 crc kubenswrapper[5058]: I1014 10:11:45.748194 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" event={"ID":"89c8bccc-ef12-4475-bc58-9139d5f602b6","Type":"ContainerStarted","Data":"8bdffaef974f9acb526b68f2a896cf011b71766aef44b77a27a2128d545366b8"} Oct 14 10:11:47 crc kubenswrapper[5058]: I1014 10:11:47.790192 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:11:47 crc kubenswrapper[5058]: E1014 10:11:47.791200 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:11:55 crc kubenswrapper[5058]: I1014 10:11:55.853368 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" event={"ID":"89c8bccc-ef12-4475-bc58-9139d5f602b6","Type":"ContainerStarted","Data":"4e1268146c1b388556f4a0b2b06eb423894a95bca8862e76e9817103924808e7"} Oct 14 10:11:55 crc kubenswrapper[5058]: I1014 10:11:55.868929 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" podStartSLOduration=1.261709924 podStartE2EDuration="11.868913384s" podCreationTimestamp="2025-10-14 10:11:44 +0000 UTC" firstStartedPulling="2025-10-14 10:11:44.878071237 +0000 UTC m=+12252.789155043" lastFinishedPulling="2025-10-14 10:11:55.485274667 +0000 UTC m=+12263.396358503" observedRunningTime="2025-10-14 10:11:55.864112107 +0000 UTC m=+12263.775195913" watchObservedRunningTime="2025-10-14 10:11:55.868913384 +0000 UTC m=+12263.779997180" Oct 14 10:12:01 crc kubenswrapper[5058]: I1014 10:12:01.790379 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:12:01 crc kubenswrapper[5058]: E1014 10:12:01.791493 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:12:14 crc kubenswrapper[5058]: I1014 10:12:14.791055 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:12:14 crc kubenswrapper[5058]: E1014 10:12:14.791919 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:12:19 crc kubenswrapper[5058]: I1014 10:12:19.094506 5058 generic.go:334] "Generic (PLEG): container finished" podID="89c8bccc-ef12-4475-bc58-9139d5f602b6" containerID="4e1268146c1b388556f4a0b2b06eb423894a95bca8862e76e9817103924808e7" exitCode=0 Oct 14 10:12:19 crc kubenswrapper[5058]: I1014 10:12:19.094586 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" event={"ID":"89c8bccc-ef12-4475-bc58-9139d5f602b6","Type":"ContainerDied","Data":"4e1268146c1b388556f4a0b2b06eb423894a95bca8862e76e9817103924808e7"} Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.249111 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.287800 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-ddkj8"] Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.298625 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-ddkj8"] Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.405469 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt9vh\" (UniqueName: \"kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh\") pod \"89c8bccc-ef12-4475-bc58-9139d5f602b6\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.405540 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host\") pod \"89c8bccc-ef12-4475-bc58-9139d5f602b6\" (UID: \"89c8bccc-ef12-4475-bc58-9139d5f602b6\") " Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.406015 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host" (OuterVolumeSpecName: "host") pod "89c8bccc-ef12-4475-bc58-9139d5f602b6" (UID: "89c8bccc-ef12-4475-bc58-9139d5f602b6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.406696 5058 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89c8bccc-ef12-4475-bc58-9139d5f602b6-host\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.424171 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh" (OuterVolumeSpecName: "kube-api-access-vt9vh") pod "89c8bccc-ef12-4475-bc58-9139d5f602b6" (UID: "89c8bccc-ef12-4475-bc58-9139d5f602b6"). InnerVolumeSpecName "kube-api-access-vt9vh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.508950 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt9vh\" (UniqueName: \"kubernetes.io/projected/89c8bccc-ef12-4475-bc58-9139d5f602b6-kube-api-access-vt9vh\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:20 crc kubenswrapper[5058]: I1014 10:12:20.806313 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c8bccc-ef12-4475-bc58-9139d5f602b6" path="/var/lib/kubelet/pods/89c8bccc-ef12-4475-bc58-9139d5f602b6/volumes" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.118958 5058 scope.go:117] "RemoveContainer" containerID="4e1268146c1b388556f4a0b2b06eb423894a95bca8862e76e9817103924808e7" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.119017 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-ddkj8" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.534033 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-5jb2q"] Oct 14 10:12:21 crc kubenswrapper[5058]: E1014 10:12:21.534492 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c8bccc-ef12-4475-bc58-9139d5f602b6" containerName="container-00" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.534504 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c8bccc-ef12-4475-bc58-9139d5f602b6" containerName="container-00" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.534684 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c8bccc-ef12-4475-bc58-9139d5f602b6" containerName="container-00" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.535467 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.630180 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.630230 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgpt\" (UniqueName: \"kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.732221 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.732269 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgpt\" (UniqueName: \"kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.732640 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.747598 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgpt\" (UniqueName: \"kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt\") pod \"crc-debug-5jb2q\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:21 crc kubenswrapper[5058]: I1014 10:12:21.861093 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:22 crc kubenswrapper[5058]: I1014 10:12:22.132127 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" event={"ID":"e43cbed3-aadc-4991-ac85-da6f90829b28","Type":"ContainerStarted","Data":"5b1165aaa993c3684343e2051b3ee57764fe5c8fd38758663f163b8303db6538"} Oct 14 10:12:22 crc kubenswrapper[5058]: I1014 10:12:22.132170 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" event={"ID":"e43cbed3-aadc-4991-ac85-da6f90829b28","Type":"ContainerStarted","Data":"1f3051373cbf3a5474dcd718acea1e360f28c17997e341cad8d25ecd09c52992"} Oct 14 10:12:22 crc kubenswrapper[5058]: I1014 10:12:22.149417 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" podStartSLOduration=1.149399361 podStartE2EDuration="1.149399361s" podCreationTimestamp="2025-10-14 10:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 10:12:22.145219552 +0000 UTC m=+12290.056303358" watchObservedRunningTime="2025-10-14 10:12:22.149399361 +0000 UTC m=+12290.060483167" Oct 14 10:12:23 crc kubenswrapper[5058]: I1014 10:12:23.144740 5058 generic.go:334] "Generic (PLEG): container finished" podID="e43cbed3-aadc-4991-ac85-da6f90829b28" containerID="5b1165aaa993c3684343e2051b3ee57764fe5c8fd38758663f163b8303db6538" exitCode=1 Oct 14 10:12:23 crc kubenswrapper[5058]: I1014 10:12:23.144843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" event={"ID":"e43cbed3-aadc-4991-ac85-da6f90829b28","Type":"ContainerDied","Data":"5b1165aaa993c3684343e2051b3ee57764fe5c8fd38758663f163b8303db6538"} Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.276126 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.316594 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-5jb2q"] Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.330176 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jb8bt/crc-debug-5jb2q"] Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.384481 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgpt\" (UniqueName: \"kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt\") pod \"e43cbed3-aadc-4991-ac85-da6f90829b28\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.384580 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host\") pod \"e43cbed3-aadc-4991-ac85-da6f90829b28\" (UID: \"e43cbed3-aadc-4991-ac85-da6f90829b28\") " Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.384864 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host" (OuterVolumeSpecName: "host") pod "e43cbed3-aadc-4991-ac85-da6f90829b28" (UID: "e43cbed3-aadc-4991-ac85-da6f90829b28"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.385279 5058 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e43cbed3-aadc-4991-ac85-da6f90829b28-host\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.390173 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt" (OuterVolumeSpecName: "kube-api-access-8fgpt") pod "e43cbed3-aadc-4991-ac85-da6f90829b28" (UID: "e43cbed3-aadc-4991-ac85-da6f90829b28"). InnerVolumeSpecName "kube-api-access-8fgpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.487561 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgpt\" (UniqueName: \"kubernetes.io/projected/e43cbed3-aadc-4991-ac85-da6f90829b28-kube-api-access-8fgpt\") on node \"crc\" DevicePath \"\"" Oct 14 10:12:24 crc kubenswrapper[5058]: I1014 10:12:24.804985 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43cbed3-aadc-4991-ac85-da6f90829b28" path="/var/lib/kubelet/pods/e43cbed3-aadc-4991-ac85-da6f90829b28/volumes" Oct 14 10:12:25 crc kubenswrapper[5058]: I1014 10:12:25.169977 5058 scope.go:117] "RemoveContainer" containerID="5b1165aaa993c3684343e2051b3ee57764fe5c8fd38758663f163b8303db6538" Oct 14 10:12:25 crc kubenswrapper[5058]: I1014 10:12:25.170083 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/crc-debug-5jb2q" Oct 14 10:12:27 crc kubenswrapper[5058]: I1014 10:12:27.790194 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:12:27 crc kubenswrapper[5058]: E1014 10:12:27.790822 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:12:40 crc kubenswrapper[5058]: I1014 10:12:40.789810 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:12:40 crc kubenswrapper[5058]: E1014 10:12:40.790567 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:12:53 crc kubenswrapper[5058]: I1014 10:12:53.790591 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:12:53 crc kubenswrapper[5058]: E1014 10:12:53.791247 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.263302 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b3f02b3a-74f4-4e01-ba89-48f0d8c48dac/init-config-reloader/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.469782 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b3f02b3a-74f4-4e01-ba89-48f0d8c48dac/config-reloader/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.484299 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b3f02b3a-74f4-4e01-ba89-48f0d8c48dac/init-config-reloader/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.528189 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_b3f02b3a-74f4-4e01-ba89-48f0d8c48dac/alertmanager/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.679177 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_349423a8-3dec-46c0-b56a-ae58820cc454/aodh-api/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.764040 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_349423a8-3dec-46c0-b56a-ae58820cc454/aodh-evaluator/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.790354 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:13:07 crc kubenswrapper[5058]: E1014 10:13:07.790608 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.854920 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_349423a8-3dec-46c0-b56a-ae58820cc454/aodh-listener/0.log" Oct 14 10:13:07 crc kubenswrapper[5058]: I1014 10:13:07.916233 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_349423a8-3dec-46c0-b56a-ae58820cc454/aodh-notifier/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.053205 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-786cbd49b4-wws7f_6858f89d-6b3d-4388-a644-7a62e04bc9ae/barbican-api/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.130646 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-786cbd49b4-wws7f_6858f89d-6b3d-4388-a644-7a62e04bc9ae/barbican-api-log/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.265786 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5dfd989b84-vpnc9_245c0703-016a-4a7c-8bae-f665b29e0c64/barbican-keystone-listener/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.504280 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-fb9b5d689-qhv8k_11c953fc-71cc-44c2-8e95-1219f40a82ff/barbican-worker/0.log" Oct 14 
10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.751630 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-fb9b5d689-qhv8k_11c953fc-71cc-44c2-8e95-1219f40a82ff/barbican-worker-log/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.758242 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5dfd989b84-vpnc9_245c0703-016a-4a7c-8bae-f665b29e0c64/barbican-keystone-listener-log/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.992400 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-p5ck7_cabd592e-2738-4150-9e93-31f1678307bc/bootstrap-openstack-openstack-cell1/0.log" Oct 14 10:13:08 crc kubenswrapper[5058]: I1014 10:13:08.997183 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell2-dndm2_f4e4eb3f-93d2-425e-b655-ec4a3c2649bb/bootstrap-openstack-openstack-cell2/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.190983 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1a6c29d-5dfc-4217-86d4-e4ea9618176b/ceilometer-central-agent/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.207269 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1a6c29d-5dfc-4217-86d4-e4ea9618176b/ceilometer-notification-agent/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.397740 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1a6c29d-5dfc-4217-86d4-e4ea9618176b/proxy-httpd/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.404501 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1a6c29d-5dfc-4217-86d4-e4ea9618176b/sg-core/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.672418 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ef61a77-84c0-426d-b452-11801332fcf4/cinder-api-log/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.687618 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ef61a77-84c0-426d-b452-11801332fcf4/cinder-api/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.900483 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250/cinder-scheduler/0.log" Oct 14 10:13:09 crc kubenswrapper[5058]: I1014 10:13:09.962616 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5b6aa6ec-ab73-40f1-a38e-a42d1ba7c250/probe/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 10:13:10.092286 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-rvbrk_9a063aae-0e40-4f4c-9f59-08ff5c699706/configure-network-openstack-openstack-cell1/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 10:13:10.253149 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell2-lgkqn_0067ccfb-d0f9-4b48-bcc6-73f70a2e206b/configure-network-openstack-openstack-cell2/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 10:13:10.462515 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-5bpns_eb3265ea-5f4a-453d-ad52-b9e735a5c693/configure-os-openstack-openstack-cell1/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 
10:13:10.604296 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-ncjhg_67663dcb-1534-4133-a168-02fcde69c9b0/configure-os-openstack-openstack-cell1/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 10:13:10.835511 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell2-j89ml_a6cd75bd-8b9a-4ee4-b2b8-990f4c982d10/configure-os-openstack-openstack-cell2/0.log" Oct 14 10:13:10 crc kubenswrapper[5058]: I1014 10:13:10.923404 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell2-zc4mc_63658dc1-97e3-43d3-9bf8-717180f1aeca/configure-os-openstack-openstack-cell2/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.071539 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77df87b869-k4fps_fb5192ff-af74-4af8-badc-d59affd44082/init/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.499243 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77df87b869-k4fps_fb5192ff-af74-4af8-badc-d59affd44082/init/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.549997 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-qln66_e3cc6fd2-28bd-4dca-b9c9-6661bb91f5df/download-cache-openstack-openstack-cell1/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.610550 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77df87b869-k4fps_fb5192ff-af74-4af8-badc-d59affd44082/dnsmasq-dns/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.750679 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell2-q5n4f_fcd39a4d-1aa8-41e4-8c0d-7ab02860dad7/download-cache-openstack-openstack-cell2/0.log" Oct 14 10:13:11 crc kubenswrapper[5058]: I1014 10:13:11.878050 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_53779db0-8c45-4fac-bee5-4ed03edabbb5/glance-httpd/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.025396 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_53779db0-8c45-4fac-bee5-4ed03edabbb5/glance-log/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.156207 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3/glance-log/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.174845 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_22d5fc38-a61f-494a-ae5b-6cc9dc35f6c3/glance-httpd/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.336587 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5584c58558-xmfqb_e6661264-a713-43c4-89da-c3c07c28799b/heat-api/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.506549 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-d84d4c977-rr8kv_b249517c-6c1a-4820-8f44-428350239048/heat-cfnapi/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.557381 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6b4df585c6-lftwf_c1f909eb-bccb-4af2-ac36-13b1f00e3a33/heat-engine/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.788710 
5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-586f669495-t2sn2_cdabbf6e-8a7a-4907-866c-e230606ad5a7/horizon/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.846517 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-586f669495-t2sn2_cdabbf6e-8a7a-4907-866c-e230606ad5a7/horizon-log/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.947340 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-spdc7_a16fc909-2fe0-43a2-96b1-ae60b842e473/install-certs-openstack-openstack-cell1/0.log" Oct 14 10:13:12 crc kubenswrapper[5058]: I1014 10:13:12.993705 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell2-nkr9n_103b3718-026c-4c76-b31a-9514db89eb0d/install-certs-openstack-openstack-cell2/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.181402 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-sthwr_25f8c470-b427-4e78-a4b3-80e6fe632075/install-os-openstack-openstack-cell1/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.252098 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell2-bfxmn_38f7f7fa-b19d-445f-bf0a-97633b0234bb/install-os-openstack-openstack-cell2/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.439544 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340541-fmsll_11a1a79e-ed50-405d-aada-2b4b6a280c98/keystone-cron/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.647292 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340601-dzqtn_1b1c6644-e9a1-4cbd-8095-debc60135fe5/keystone-cron/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.829711 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5cc6f5bd-2c4f-418b-bf74-97db3a201df6/kube-state-metrics/0.log" Oct 14 10:13:13 crc kubenswrapper[5058]: I1014 10:13:13.992556 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-64c6487b4d-qstgs_68efc181-c994-47af-a145-385f9670b8b8/keystone-api/0.log" Oct 14 10:13:14 crc kubenswrapper[5058]: I1014 10:13:14.072219 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-7wfsc_7e70a9ec-db2c-4945-823a-1392efdabd0e/libvirt-openstack-openstack-cell1/0.log" Oct 14 10:13:14 crc kubenswrapper[5058]: I1014 10:13:14.231792 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell2-nblb9_b82ae322-0931-45e2-9b71-f6be598d8245/libvirt-openstack-openstack-cell2/0.log" Oct 14 10:13:14 crc kubenswrapper[5058]: I1014 10:13:14.751019 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7df8c6fbcc-lbcbf_e1e202a6-e40e-4322-b8b3-5d4594b1023e/neutron-httpd/0.log" Oct 14 10:13:15 crc kubenswrapper[5058]: I1014 10:13:15.083695 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7df8c6fbcc-lbcbf_e1e202a6-e40e-4322-b8b3-5d4594b1023e/neutron-api/0.log" Oct 14 10:13:15 crc kubenswrapper[5058]: I1014 10:13:15.301728 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell2-bff7d_17ff4734-b12e-40cd-8e59-38d648ea5e9e/neutron-dhcp-openstack-openstack-cell2/0.log" Oct 14 10:13:15 crc 
kubenswrapper[5058]: I1014 10:13:15.332327 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-4srrf_20a08639-e660-434d-89c5-581137866521/neutron-dhcp-openstack-openstack-cell1/0.log" Oct 14 10:13:15 crc kubenswrapper[5058]: I1014 10:13:15.556640 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-tzszb_9c1079ae-4815-4773-a5e3-84e532ed66db/neutron-metadata-openstack-openstack-cell1/0.log" Oct 14 10:13:15 crc kubenswrapper[5058]: I1014 10:13:15.831468 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell2-fwvnf_f168337a-a9d3-4bb4-b7ef-306370917b3f/neutron-metadata-openstack-openstack-cell2/0.log" Oct 14 10:13:15 crc kubenswrapper[5058]: I1014 10:13:15.845650 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-bd2pj_13b04ee2-dd54-4eb5-ba47-485bf575545c/neutron-sriov-openstack-openstack-cell1/0.log" Oct 14 10:13:16 crc kubenswrapper[5058]: I1014 10:13:16.085246 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell2-dz9nm_b564386b-d09e-432c-aff9-00e4c40f8aac/neutron-sriov-openstack-openstack-cell2/0.log" Oct 14 10:13:16 crc kubenswrapper[5058]: I1014 10:13:16.483019 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a005d70d-1142-4257-8095-6aa5023ed28d/nova-api-api/0.log" Oct 14 10:13:16 crc kubenswrapper[5058]: I1014 10:13:16.686195 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a005d70d-1142-4257-8095-6aa5023ed28d/nova-api-log/0.log" Oct 14 10:13:16 crc kubenswrapper[5058]: I1014 10:13:16.760228 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_283127b1-9cc1-47f1-80a2-551566f93544/nova-cell0-conductor-conductor/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.031196 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ec35f4ba-510f-4665-8f2c-3ed0bb2b91d8/nova-cell1-conductor-conductor/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.160086 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c6c32740-d479-4c2b-91ad-60bd6955ce11/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.312942 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellc6xzr_2c20d114-97e4-4d3e-a7a0-40034e38b794/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.576814 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-vf28n_465814e2-5b61-4d58-bdde-96e74d43768a/nova-cell1-openstack-openstack-cell1/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.753948 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell2-conductor-0_78bf9d98-9166-4c93-adfb-f327f12f6d09/nova-cell2-conductor-conductor/0.log" Oct 14 10:13:17 crc kubenswrapper[5058]: I1014 10:13:17.948676 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell2-novncproxy-0_25111cea-43f8-465b-a3f4-33933be1b593/nova-cell2-novncproxy-novncproxy/0.log" Oct 14 10:13:18 crc kubenswrapper[5058]: I1014 10:13:18.210738 5058 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cellc4jj7_5a0193e1-89dc-4ce4-a87c-a16916218a8a/nova-cell2-openstack-nova-compute-ffu-cell2-openstack-cell2/0.log" Oct 14 10:13:18 crc kubenswrapper[5058]: I1014 10:13:18.331050 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell2-openstack-openstack-cell2-j2jsg_42904e5c-c197-40d7-bf0d-157c20982f80/nova-cell2-openstack-openstack-cell2/0.log" Oct 14 10:13:18 crc kubenswrapper[5058]: I1014 10:13:18.522945 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell3-conductor-0_01281f7e-5368-4ab7-be69-fcfed8f7a207/nova-cell3-conductor-conductor/0.log" Oct 14 10:13:18 crc kubenswrapper[5058]: I1014 10:13:18.959025 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell3-novncproxy-0_3e9a43ee-d5b2-4abd-9daf-e37c88284049/nova-cell3-novncproxy-novncproxy/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.155096 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_21e9a657-7533-48bf-9ff4-a12185627a61/nova-metadata-log/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.266973 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_21e9a657-7533-48bf-9ff4-a12185627a61/nova-metadata-metadata/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.482568 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3da7def0-22f7-4151-95ff-dbf540934e38/memcached/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.516918 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_284bfce7-71a1-418e-a39d-9636fa58aec6/mysql-bootstrap/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.556988 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e07126c4-2f3b-4001-8e77-5a7555836354/nova-scheduler-scheduler/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.718007 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_284bfce7-71a1-418e-a39d-9636fa58aec6/galera/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.777371 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_284bfce7-71a1-418e-a39d-9636fa58aec6/mysql-bootstrap/0.log" Oct 14 10:13:19 crc kubenswrapper[5058]: I1014 10:13:19.785223 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell2-galera-0_4947a1e1-3c94-4e63-ae34-1a1540c3f121/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.038568 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell2-galera-0_4947a1e1-3c94-4e63-ae34-1a1540c3f121/galera/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.054462 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell3-galera-0_aca65ca2-d9c7-4553-9ebe-5a4a9f22a040/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.061025 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell2-galera-0_4947a1e1-3c94-4e63-ae34-1a1540c3f121/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.247493 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell3-galera-0_aca65ca2-d9c7-4553-9ebe-5a4a9f22a040/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.262622 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_654ae580-516a-4b85-a9e8-39ebff319e12/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.288318 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell3-galera-0_aca65ca2-d9c7-4553-9ebe-5a4a9f22a040/galera/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.459101 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_654ae580-516a-4b85-a9e8-39ebff319e12/mysql-bootstrap/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.474720 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_654ae580-516a-4b85-a9e8-39ebff319e12/galera/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.556700 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_11b87d3e-44ac-4c85-a244-699d47ba80fa/openstackclient/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.674579 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d1b5b14-b87d-4e85-a84c-0e632336be97/openstack-network-exporter/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.797782 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d1b5b14-b87d-4e85-a84c-0e632336be97/ovn-northd/0.log" Oct 14 10:13:20 crc kubenswrapper[5058]: I1014 10:13:20.903168 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-9gfxk_41b07bed-d3c4-40f7-bbb8-4d487bea3c5c/ovn-openstack-openstack-cell1/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.045325 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell2-lsmm7_b6c2dca5-aca4-43a8-afd2-005e5c71f8e8/ovn-openstack-openstack-cell2/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.109225 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fff994c4-4220-4da1-8c5d-dbaf65e2b117/openstack-network-exporter/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.182834 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fff994c4-4220-4da1-8c5d-dbaf65e2b117/ovsdbserver-nb/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.345240 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_bf95d4f8-e496-480a-8201-48da4b99973b/openstack-network-exporter/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.360611 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_bf95d4f8-e496-480a-8201-48da4b99973b/ovsdbserver-nb/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.475663 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_107587cc-f1ce-4990-9d6b-6140322ba805/openstack-network-exporter/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.543929 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_107587cc-f1ce-4990-9d6b-6140322ba805/ovsdbserver-nb/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.695427 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_047c2989-508f-4847-afa8-67c09f45bc92/openstack-network-exporter/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.704200 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_047c2989-508f-4847-afa8-67c09f45bc92/ovsdbserver-sb/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.780328 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_1c95e379-00e4-475f-9bcb-841be048de5a/openstack-network-exporter/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.790706 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:13:21 crc kubenswrapper[5058]: E1014 10:13:21.791261 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.931714 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_1c95e379-00e4-475f-9bcb-841be048de5a/ovsdbserver-sb/0.log" Oct 14 10:13:21 crc kubenswrapper[5058]: I1014 10:13:21.987778 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_2a475911-a9eb-4305-9abc-6f5cedb592b6/openstack-network-exporter/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.068433 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_2a475911-a9eb-4305-9abc-6f5cedb592b6/ovsdbserver-sb/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.261548 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667758b98d-p65v6_fe935e0a-7cdb-4711-b7f8-6e8315568e92/placement-api/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.329255 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-667758b98d-p65v6_fe935e0a-7cdb-4711-b7f8-6e8315568e92/placement-log/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.379675 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c2nf56_8e3244f3-1a76-4003-8e1a-62b390032bd2/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.541985 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cp7kk4_9138e077-43e6-4e16-b9c9-83b3c807b456/pre-adoption-validation-openstack-pre-adoption-openstack-cell2/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.601047 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4d38fa47-7bea-4cd8-be24-67ed0f250750/init-config-reloader/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.740502 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4d38fa47-7bea-4cd8-be24-67ed0f250750/init-config-reloader/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.744835 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_4d38fa47-7bea-4cd8-be24-67ed0f250750/config-reloader/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.759134 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4d38fa47-7bea-4cd8-be24-67ed0f250750/prometheus/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.778280 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4d38fa47-7bea-4cd8-be24-67ed0f250750/thanos-sidecar/0.log" Oct 14 10:13:22 crc kubenswrapper[5058]: I1014 10:13:22.928725 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.104488 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.173527 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c0ca5fe1-feeb-4f89-ab67-de2fcfa4d8c2/rabbitmq/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.219814 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell2-server-0_552c2732-565b-4a0b-bbc2-64ca8e832310/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.611372 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell2-server-0_552c2732-565b-4a0b-bbc2-64ca8e832310/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.621957 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell3-server-0_3001759e-a841-4846-8879-5e8fc14f1f5e/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.623982 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell2-server-0_552c2732-565b-4a0b-bbc2-64ca8e832310/rabbitmq/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.916237 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell3-server-0_3001759e-a841-4846-8879-5e8fc14f1f5e/rabbitmq/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.916829 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell3-server-0_3001759e-a841-4846-8879-5e8fc14f1f5e/setup-container/0.log" Oct 14 10:13:23 crc kubenswrapper[5058]: I1014 10:13:23.946500 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf35a4e3-bdb0-4a3e-802c-46f70875a887/setup-container/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.145916 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf35a4e3-bdb0-4a3e-802c-46f70875a887/rabbitmq/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.159063 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cf35a4e3-bdb0-4a3e-802c-46f70875a887/setup-container/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.187949 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-5dg8j_aa3e25fa-c06a-408f-9f2a-cce6d3c5026a/reboot-os-openstack-openstack-cell1/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.343075 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell2-sst57_b0d0e557-c74b-47f3-9089-4ff59ca5ab96/reboot-os-openstack-openstack-cell2/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.408694 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-k65r8_98754677-6b19-4bca-abef-380800bf88ac/run-os-openstack-openstack-cell1/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.566832 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell2-qqsbb_b381c5b6-134c-4895-9594-c62154835cbb/run-os-openstack-openstack-cell2/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.648165 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-n2d66_dd8f52ac-f61f-4197-be85-6a78e8e27159/ssh-known-hosts-openstack/0.log" Oct 14 10:13:24 crc kubenswrapper[5058]: I1014 10:13:24.861072 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-86cccbdf6-gvj7b_032ce0df-73cb-4f14-89ea-400cb13781a5/proxy-server/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.025400 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-86cccbdf6-gvj7b_032ce0df-73cb-4f14-89ea-400cb13781a5/proxy-httpd/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.027357 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2nwh9_d8286808-dcf6-4843-9278-43e3983998ae/swift-ring-rebalance/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.172961 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-lcqlj_b017bff1-afde-427d-b072-6adef53f7daf/telemetry-openstack-openstack-cell1/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.316337 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell2-7vqj9_ba866ea4-9da5-4c06-9832-80094dd123a3/telemetry-openstack-openstack-cell2/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.421767 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e68b0860-bc9e-43b0-a2c5-4fd969d4149a/tempest-tests-tempest-tests-runner/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.533128 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b20631a1-da6f-4256-8ec2-15517ec5767a/test-operator-logs-container/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.599422 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-xxkc2_4e159776-52a9-419f-936f-4a921173dd04/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.733308 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell2-r52n4_f0c0452c-9556-4225-89fd-d5a07ad5fbfa/tripleo-cleanup-tripleo-cleanup-openstack-cell2/0.log" Oct 14 10:13:25 crc kubenswrapper[5058]: I1014 10:13:25.826042 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-wftx7_3b81f53f-5f91-41a5-b84e-8de887356308/validate-network-openstack-openstack-cell1/0.log" Oct 14 10:13:26 crc kubenswrapper[5058]: I1014 10:13:26.007537 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-openstack-openstack-cell2-bfkb2_1dc66c44-6583-4165-9d04-012b1fed763a/validate-network-openstack-openstack-cell2/0.log" Oct 14 10:13:35 crc kubenswrapper[5058]: I1014 10:13:35.789570 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:13:35 crc kubenswrapper[5058]: E1014 10:13:35.790216 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:13:46 crc kubenswrapper[5058]: I1014 10:13:46.789935 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:13:46 crc kubenswrapper[5058]: E1014 10:13:46.790891 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.588205 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/util/0.log" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.743477 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/util/0.log" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.786336 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/pull/0.log" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.791178 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/pull/0.log" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.983907 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/pull/0.log" Oct 14 10:13:48 crc kubenswrapper[5058]: I1014 10:13:48.988408 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/util/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.006742 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ah76v6_f8113d5b-0ab3-4158-9f25-905f7d658e5c/extract/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.185397 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-ktwj8_182bf838-8aa5-488a-80cb-b0b5d45b8503/kube-rbac-proxy/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.258140 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-tdlj7_e4f8d193-5863-4d60-9c5a-8596e64dcdc5/kube-rbac-proxy/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.301271 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-ktwj8_182bf838-8aa5-488a-80cb-b0b5d45b8503/manager/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.466222 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-rzjp4_eb904271-90a2-4e14-8034-230236c0bbda/kube-rbac-proxy/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.466232 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-tdlj7_e4f8d193-5863-4d60-9c5a-8596e64dcdc5/manager/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.496108 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-rzjp4_eb904271-90a2-4e14-8034-230236c0bbda/manager/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.660009 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-b6kh5_f25c3017-c824-47f3-add1-9bd90d8c6d69/kube-rbac-proxy/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.820615 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-b6kh5_f25c3017-c824-47f3-add1-9bd90d8c6d69/manager/0.log" Oct 14 10:13:49 crc kubenswrapper[5058]: I1014 10:13:49.849137 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-8sbg9_d3e72aae-75a1-4af7-8db5-2962735f6e86/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.010788 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-8sbg9_d3e72aae-75a1-4af7-8db5-2962735f6e86/manager/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.057966 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-c5sjr_88b4ba9d-c49f-47e2-aff8-c4e766d25bcc/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.088966 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-c5sjr_88b4ba9d-c49f-47e2-aff8-c4e766d25bcc/manager/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.199469 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-8tmms_871cbb71-8b23-4e59-958f-436fb42cf771/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.411756 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-2gj7w_b59421ab-44ca-48a0-945b-f6c42153397d/manager/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.420565 5058 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-2gj7w_b59421ab-44ca-48a0-945b-f6c42153397d/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.596016 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-8tmms_871cbb71-8b23-4e59-958f-436fb42cf771/manager/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.649220 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-sv4jh_67131cbe-92c7-4051-a27b-1ed2527b6b04/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.780596 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-sv4jh_67131cbe-92c7-4051-a27b-1ed2527b6b04/manager/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.831081 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-lsznt_096b5116-c25a-4443-b8c0-af75f2b92262/kube-rbac-proxy/0.log" Oct 14 10:13:50 crc kubenswrapper[5058]: I1014 10:13:50.891199 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-lsznt_096b5116-c25a-4443-b8c0-af75f2b92262/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.001201 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-p27j7_a005a6bf-bd52-4d64-9895-08889634f350/kube-rbac-proxy/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.061654 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-p27j7_a005a6bf-bd52-4d64-9895-08889634f350/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.169539 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-j8qpq_156130c4-7aa0-4acd-b91b-d69ab4c86747/kube-rbac-proxy/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.256574 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-j8qpq_156130c4-7aa0-4acd-b91b-d69ab4c86747/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.357215 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-hs54l_cf76dba6-ea6c-4e07-9261-0c960bbae828/kube-rbac-proxy/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.453098 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-nggc9_8d419868-4916-4ad0-bb8b-b75dec32fd07/kube-rbac-proxy/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.622912 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-nggc9_8d419868-4916-4ad0-bb8b-b75dec32fd07/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.654371 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-hs54l_cf76dba6-ea6c-4e07-9261-0c960bbae828/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 
10:13:51.725820 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55b7d44848jxd9b_b5cb6101-1d0d-45af-9a55-09c33013c269/kube-rbac-proxy/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.874145 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55b7d44848jxd9b_b5cb6101-1d0d-45af-9a55-09c33013c269/manager/0.log" Oct 14 10:13:51 crc kubenswrapper[5058]: I1014 10:13:51.887390 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fb8c88b76-t9bzz_b0d91fa4-f695-4d4d-8a83-22fb8d5a2660/kube-rbac-proxy/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.104550 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-64895cd698-zp5kp_364b619b-fb9b-4f5c-97ea-61c43987ea42/kube-rbac-proxy/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.277362 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-64895cd698-zp5kp_364b619b-fb9b-4f5c-97ea-61c43987ea42/operator/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.404587 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-mvrtg_7dfea04c-ae28-4df3-addc-f14abc359748/kube-rbac-proxy/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.550468 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bc9r8_6bc99329-2101-4c16-ada2-1ab7df1e51ca/registry-server/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.573501 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-mvrtg_7dfea04c-ae28-4df3-addc-f14abc359748/manager/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.630317 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-nds65_853f3dc8-d9ec-4220-8190-dd7f5789e3ef/kube-rbac-proxy/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.835476 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-hmz6c_4b8f12d6-84f1-4095-817a-bbbfc74bb384/operator/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.856587 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-nds65_853f3dc8-d9ec-4220-8190-dd7f5789e3ef/manager/0.log" Oct 14 10:13:52 crc kubenswrapper[5058]: I1014 10:13:52.995906 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-ndx29_a5666028-58d7-4a84-bdcb-4df88b8bbcfe/kube-rbac-proxy/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.068535 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-gstgk_287a5a36-ab23-48aa-bce3-c404703271d0/kube-rbac-proxy/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.099674 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-ndx29_a5666028-58d7-4a84-bdcb-4df88b8bbcfe/manager/0.log" Oct 14 
10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.267211 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5458f77c4-2fm9v_0285fda4-68ce-4cf0-bb77-50d2e3a66950/kube-rbac-proxy/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.321158 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5458f77c4-2fm9v_0285fda4-68ce-4cf0-bb77-50d2e3a66950/manager/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.485480 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-gstgk_287a5a36-ab23-48aa-bce3-c404703271d0/manager/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.517184 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-2gfn7_f40617a5-db6e-429b-8665-58fa79067b23/kube-rbac-proxy/0.log" Oct 14 10:13:53 crc kubenswrapper[5058]: I1014 10:13:53.546706 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-2gfn7_f40617a5-db6e-429b-8665-58fa79067b23/manager/0.log" Oct 14 10:13:54 crc kubenswrapper[5058]: I1014 10:13:54.894132 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fb8c88b76-t9bzz_b0d91fa4-f695-4d4d-8a83-22fb8d5a2660/manager/0.log" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.577045 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wj7mj"] Oct 14 10:13:59 crc kubenswrapper[5058]: E1014 10:13:59.578379 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43cbed3-aadc-4991-ac85-da6f90829b28" containerName="container-00" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.578398 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43cbed3-aadc-4991-ac85-da6f90829b28" containerName="container-00" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.578678 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43cbed3-aadc-4991-ac85-da6f90829b28" containerName="container-00" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.580551 5058 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.589972 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"] Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.677176 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.677456 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.677603 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzstc\" (UniqueName: \"kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.780501 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.780557 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.780643 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzstc\" (UniqueName: \"kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.781014 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.781293 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.810664 5058 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzstc\" (UniqueName: \"kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc\") pod \"community-operators-wj7mj\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") " pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:13:59 crc kubenswrapper[5058]: I1014 10:13:59.916536 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:14:00 crc kubenswrapper[5058]: I1014 10:14:00.495292 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"] Oct 14 10:14:00 crc kubenswrapper[5058]: I1014 10:14:00.790322 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:14:00 crc kubenswrapper[5058]: E1014 10:14:00.790796 5058 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q5fhs_openshift-machine-config-operator(64184db4-5b6d-4aa8-b780-c9f6163af3d8)\"" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" Oct 14 10:14:01 crc kubenswrapper[5058]: I1014 10:14:01.275394 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerID="546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4" exitCode=0 Oct 14 10:14:01 crc kubenswrapper[5058]: I1014 10:14:01.275453 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerDied","Data":"546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4"} Oct 14 10:14:01 crc kubenswrapper[5058]: I1014 10:14:01.275495 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerStarted","Data":"8fe175ec3d7eb74d9c4c66787fc8b6989e88764759806e73684f66f2342443d7"} Oct 14 10:14:01 crc kubenswrapper[5058]: I1014 10:14:01.277504 5058 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 10:14:03 crc kubenswrapper[5058]: I1014 10:14:03.310924 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerStarted","Data":"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"} Oct 14 10:14:04 crc kubenswrapper[5058]: I1014 10:14:04.326649 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerID="48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70" exitCode=0 Oct 14 10:14:04 crc kubenswrapper[5058]: I1014 10:14:04.326748 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerDied","Data":"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"} Oct 14 10:14:05 crc kubenswrapper[5058]: I1014 10:14:05.340234 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" 
event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerStarted","Data":"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"} Oct 14 10:14:05 crc kubenswrapper[5058]: I1014 10:14:05.364660 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wj7mj" podStartSLOduration=2.832077573 podStartE2EDuration="6.364638383s" podCreationTimestamp="2025-10-14 10:13:59 +0000 UTC" firstStartedPulling="2025-10-14 10:14:01.277301681 +0000 UTC m=+12389.188385487" lastFinishedPulling="2025-10-14 10:14:04.809862491 +0000 UTC m=+12392.720946297" observedRunningTime="2025-10-14 10:14:05.361215376 +0000 UTC m=+12393.272299262" watchObservedRunningTime="2025-10-14 10:14:05.364638383 +0000 UTC m=+12393.275722189" Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.916751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.918369 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.964874 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.445701 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wj7mj" Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.501672 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"] Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.604191 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q7kwd_5fb110f6-ed24-411b-a654-cd7aac0538b6/control-plane-machine-set-operator/0.log" Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.750583 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-248mp_20b4f849-7c8c-43f7-858a-1db881105d28/kube-rbac-proxy/0.log" Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.840432 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-248mp_20b4f849-7c8c-43f7-858a-1db881105d28/machine-api-operator/0.log" Oct 14 10:14:11 crc kubenswrapper[5058]: I1014 10:14:11.790437 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.415857 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727"} Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.416010 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wj7mj" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="registry-server" containerID="cri-o://dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69" gracePeriod=2 Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.916528 5058 util.go:48] "No ready sandbox for pod can be found. 
Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.916751 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.918369 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:09 crc kubenswrapper[5058]: I1014 10:14:09.964874 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.445701 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.501672 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"]
Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.604191 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q7kwd_5fb110f6-ed24-411b-a654-cd7aac0538b6/control-plane-machine-set-operator/0.log"
Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.750583 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-248mp_20b4f849-7c8c-43f7-858a-1db881105d28/kube-rbac-proxy/0.log"
Oct 14 10:14:10 crc kubenswrapper[5058]: I1014 10:14:10.840432 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-248mp_20b4f849-7c8c-43f7-858a-1db881105d28/machine-api-operator/0.log"
Oct 14 10:14:11 crc kubenswrapper[5058]: I1014 10:14:11.790437 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461"
Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.415857 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727"}
Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.416010 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wj7mj" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="registry-server" containerID="cri-o://dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69" gracePeriod=2
Oct 14 10:14:12 crc kubenswrapper[5058]: I1014 10:14:12.916528 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.087278 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzstc\" (UniqueName: \"kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc\") pod \"8e105a82-abed-433e-9150-4ef1d8d0ef14\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") "
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.087364 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content\") pod \"8e105a82-abed-433e-9150-4ef1d8d0ef14\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") "
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.087389 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities\") pod \"8e105a82-abed-433e-9150-4ef1d8d0ef14\" (UID: \"8e105a82-abed-433e-9150-4ef1d8d0ef14\") "
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.088552 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities" (OuterVolumeSpecName: "utilities") pod "8e105a82-abed-433e-9150-4ef1d8d0ef14" (UID: "8e105a82-abed-433e-9150-4ef1d8d0ef14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.096250 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc" (OuterVolumeSpecName: "kube-api-access-pzstc") pod "8e105a82-abed-433e-9150-4ef1d8d0ef14" (UID: "8e105a82-abed-433e-9150-4ef1d8d0ef14"). InnerVolumeSpecName "kube-api-access-pzstc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.145561 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e105a82-abed-433e-9150-4ef1d8d0ef14" (UID: "8e105a82-abed-433e-9150-4ef1d8d0ef14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.189896 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzstc\" (UniqueName: \"kubernetes.io/projected/8e105a82-abed-433e-9150-4ef1d8d0ef14-kube-api-access-pzstc\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.189935 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.189948 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e105a82-abed-433e-9150-4ef1d8d0ef14-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.427783 5058 generic.go:334] "Generic (PLEG): container finished" podID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerID="dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69" exitCode=0
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.427843 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerDied","Data":"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"}
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.427870 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj7mj" event={"ID":"8e105a82-abed-433e-9150-4ef1d8d0ef14","Type":"ContainerDied","Data":"8fe175ec3d7eb74d9c4c66787fc8b6989e88764759806e73684f66f2342443d7"}
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.427886 5058 scope.go:117] "RemoveContainer" containerID="dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.427997 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wj7mj"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.466619 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"]
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.475277 5058 scope.go:117] "RemoveContainer" containerID="48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.481992 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wj7mj"]
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.520357 5058 scope.go:117] "RemoveContainer" containerID="546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.572727 5058 scope.go:117] "RemoveContainer" containerID="dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"
Oct 14 10:14:13 crc kubenswrapper[5058]: E1014 10:14:13.573146 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69\": container with ID starting with dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69 not found: ID does not exist" containerID="dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.573188 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69"} err="failed to get container status \"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69\": rpc error: code = NotFound desc = could not find container \"dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69\": container with ID starting with dcbfda4bf64fb7ad8998aea8a733bdb797e5bda6447147e2c593fa5c5fc44c69 not found: ID does not exist"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.573215 5058 scope.go:117] "RemoveContainer" containerID="48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"
Oct 14 10:14:13 crc kubenswrapper[5058]: E1014 10:14:13.573628 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70\": container with ID starting with 48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70 not found: ID does not exist" containerID="48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.573766 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70"} err="failed to get container status \"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70\": rpc error: code = NotFound desc = could not find container \"48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70\": container with ID starting with 48dc4a33a11b0053bb1e56b5784174f7ede0d768c50fc4bbb8f7e3b7a1dfed70 not found: ID does not exist"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.573983 5058 scope.go:117] "RemoveContainer" containerID="546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4"
Oct 14 10:14:13 crc kubenswrapper[5058]: E1014 10:14:13.574307 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4\": container with ID starting with 546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4 not found: ID does not exist" containerID="546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4"
Oct 14 10:14:13 crc kubenswrapper[5058]: I1014 10:14:13.574333 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4"} err="failed to get container status \"546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4\": rpc error: code = NotFound desc = could not find container \"546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4\": container with ID starting with 546994ae5199c7a770abfa6b399d92151f61e20913e2abf07d600b87b8cb0bc4 not found: ID does not exist"
path="/var/log/pods/metallb-system_controller-68d546b9d8-5l2rj_dbd04243-c4a2-469c-8479-3de6c521fa51/kube-rbac-proxy/0.log" Oct 14 10:14:49 crc kubenswrapper[5058]: I1014 10:14:49.818076 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-frr-files/0.log" Oct 14 10:14:49 crc kubenswrapper[5058]: I1014 10:14:49.993761 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-frr-files/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.048454 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-reloader/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.059007 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-metrics/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.126117 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-5l2rj_dbd04243-c4a2-469c-8479-3de6c521fa51/controller/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.207351 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-reloader/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.310450 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-frr-files/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.337505 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-reloader/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.372022 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-metrics/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.431253 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-metrics/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.559750 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-frr-files/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.564409 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-reloader/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.585467 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/cp-metrics/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.624083 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/controller/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.746253 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/frr-metrics/0.log" Oct 14 10:14:50 crc kubenswrapper[5058]: I1014 10:14:50.804961 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/kube-rbac-proxy/0.log" Oct 14 10:14:50 crc 
kubenswrapper[5058]: I1014 10:14:50.883637 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/kube-rbac-proxy-frr/0.log" Oct 14 10:14:51 crc kubenswrapper[5058]: I1014 10:14:51.037303 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/reloader/0.log" Oct 14 10:14:51 crc kubenswrapper[5058]: I1014 10:14:51.102291 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-ggpvr_f5b555e5-444c-4c58-976e-0c041f7e5bba/frr-k8s-webhook-server/0.log" Oct 14 10:14:51 crc kubenswrapper[5058]: I1014 10:14:51.229475 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59bb547746-xxnjn_4ae5e206-2357-4e79-a812-9a6a6b3be856/manager/0.log" Oct 14 10:14:51 crc kubenswrapper[5058]: I1014 10:14:51.488933 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bf9c89b58-t7r85_43ee0dfb-b680-485a-b3de-6102c41429f6/webhook-server/0.log" Oct 14 10:14:51 crc kubenswrapper[5058]: I1014 10:14:51.544972 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-79nbt_dbf554ea-ea3c-4407-8f15-6a8c564d81e7/kube-rbac-proxy/0.log" Oct 14 10:14:52 crc kubenswrapper[5058]: I1014 10:14:52.448501 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-79nbt_dbf554ea-ea3c-4407-8f15-6a8c564d81e7/speaker/0.log" Oct 14 10:14:54 crc kubenswrapper[5058]: I1014 10:14:54.493670 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t4dz4_457eef4c-39c2-49a3-a77d-b41938e270de/frr/0.log" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.160132 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc"] Oct 14 10:15:00 crc kubenswrapper[5058]: E1014 10:15:00.161115 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="extract-utilities" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.161136 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="extract-utilities" Oct 14 10:15:00 crc kubenswrapper[5058]: E1014 10:15:00.161358 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="registry-server" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.161366 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="registry-server" Oct 14 10:15:00 crc kubenswrapper[5058]: E1014 10:15:00.161409 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="extract-content" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.161419 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="extract-content" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.161720 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e105a82-abed-433e-9150-4ef1d8d0ef14" containerName="registry-server" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.163046 5058 util.go:30] "No sandbox for pod can be found. 
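The job name collect-profiles-29340615 encodes its own schedule: the CronJob controller suffixes each Job with the scheduled time expressed in minutes since the Unix epoch, and 29340615 decodes to exactly the 10:15:00 tick at which the pod is ADDed above. (The earlier run collect-profiles-29340570, deleted a few entries below, decodes the same way to 09:30:00, 45 minutes before.) A one-line check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob job suffixes are scheduled-time-in-minutes since the epoch.
	fmt.Println(time.Unix(29340615*60, 0).UTC()) // 2025-10-14 10:15:00 +0000 UTC
	fmt.Println(time.Unix(29340570*60, 0).UTC()) // 2025-10-14 09:30:00 +0000 UTC
}
```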
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.166610 5058 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.166619 5058 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.191269 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc"] Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.350987 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjp2d\" (UniqueName: \"kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.351536 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.351851 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.454023 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjp2d\" (UniqueName: \"kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.454176 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.454212 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.455325 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume\") pod 
\"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.466941 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.472590 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjp2d\" (UniqueName: \"kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d\") pod \"collect-profiles-29340615-dk9gc\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.488069 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:00 crc kubenswrapper[5058]: I1014 10:15:00.991961 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc"] Oct 14 10:15:01 crc kubenswrapper[5058]: I1014 10:15:01.958530 5058 generic.go:334] "Generic (PLEG): container finished" podID="66a07d8f-21fb-44be-a157-89a7fd83124b" containerID="7ebfa91caa91025e4d0392f74cd02c9406e5aa40b966cc902c1044c93e47e571" exitCode=0 Oct 14 10:15:01 crc kubenswrapper[5058]: I1014 10:15:01.958742 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" event={"ID":"66a07d8f-21fb-44be-a157-89a7fd83124b","Type":"ContainerDied","Data":"7ebfa91caa91025e4d0392f74cd02c9406e5aa40b966cc902c1044c93e47e571"} Oct 14 10:15:01 crc kubenswrapper[5058]: I1014 10:15:01.959067 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" event={"ID":"66a07d8f-21fb-44be-a157-89a7fd83124b","Type":"ContainerStarted","Data":"877c5208e58e26c57faa31dd809a07abee8da13514026e247874cc7ca2e2ee40"} Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.372458 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.515329 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjp2d\" (UniqueName: \"kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d\") pod \"66a07d8f-21fb-44be-a157-89a7fd83124b\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.515707 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume\") pod \"66a07d8f-21fb-44be-a157-89a7fd83124b\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.515749 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume\") pod \"66a07d8f-21fb-44be-a157-89a7fd83124b\" (UID: \"66a07d8f-21fb-44be-a157-89a7fd83124b\") " Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.516918 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume" (OuterVolumeSpecName: "config-volume") pod "66a07d8f-21fb-44be-a157-89a7fd83124b" (UID: "66a07d8f-21fb-44be-a157-89a7fd83124b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.520500 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d" (OuterVolumeSpecName: "kube-api-access-rjp2d") pod "66a07d8f-21fb-44be-a157-89a7fd83124b" (UID: "66a07d8f-21fb-44be-a157-89a7fd83124b"). InnerVolumeSpecName "kube-api-access-rjp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.523053 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66a07d8f-21fb-44be-a157-89a7fd83124b" (UID: "66a07d8f-21fb-44be-a157-89a7fd83124b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.618398 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjp2d\" (UniqueName: \"kubernetes.io/projected/66a07d8f-21fb-44be-a157-89a7fd83124b-kube-api-access-rjp2d\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.618444 5058 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a07d8f-21fb-44be-a157-89a7fd83124b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.618461 5058 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a07d8f-21fb-44be-a157-89a7fd83124b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.978649 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" event={"ID":"66a07d8f-21fb-44be-a157-89a7fd83124b","Type":"ContainerDied","Data":"877c5208e58e26c57faa31dd809a07abee8da13514026e247874cc7ca2e2ee40"} Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.978691 5058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877c5208e58e26c57faa31dd809a07abee8da13514026e247874cc7ca2e2ee40" Oct 14 10:15:03 crc kubenswrapper[5058]: I1014 10:15:03.978744 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340615-dk9gc" Oct 14 10:15:04 crc kubenswrapper[5058]: I1014 10:15:04.447828 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"] Oct 14 10:15:04 crc kubenswrapper[5058]: I1014 10:15:04.461994 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340570-z8rdp"] Oct 14 10:15:04 crc kubenswrapper[5058]: I1014 10:15:04.806123 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7066df10-e0b6-43a3-ad6e-35614021029d" path="/var/lib/kubelet/pods/7066df10-e0b6-43a3-ad6e-35614021029d/volumes" Oct 14 10:15:04 crc kubenswrapper[5058]: I1014 10:15:04.825268 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/util/0.log" Oct 14 10:15:04 crc kubenswrapper[5058]: I1014 10:15:04.968385 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/util/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.038144 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.066649 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.315407 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/extract/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.316322 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.382975 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69wrb85_419dbddd-983d-4eee-8e5a-9ad9ee3de5b2/util/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.521690 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/util/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.690572 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/util/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.692003 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.704755 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.912460 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/util/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.935533 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/pull/0.log" Oct 14 10:15:05 crc kubenswrapper[5058]: I1014 10:15:05.940985 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d28rc9k_ee02c1d0-f6e3-48ec-ad2d-e0fab5f1bb5e/extract/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.122626 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/util/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.283000 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/pull/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.308122 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/pull/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.313718 5058 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/util/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.465593 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/pull/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.469483 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/extract/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.500725 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8hqtx_9d91c6c2-14e3-4d18-b53b-55dd5000d1ad/util/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.621447 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-utilities/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.833583 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-content/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.837469 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-content/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.852881 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-utilities/0.log" Oct 14 10:15:06 crc kubenswrapper[5058]: I1014 10:15:06.976772 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-utilities/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.077722 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/extract-content/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.191522 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-utilities/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.490128 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-content/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.511866 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-content/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.574583 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-utilities/0.log" Oct 14 10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.776389 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-utilities/0.log" Oct 14 
10:15:07 crc kubenswrapper[5058]: I1014 10:15:07.845216 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/extract-content/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.113690 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/util/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.329470 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/util/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.332203 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/pull/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.351248 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/pull/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.522452 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/util/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.555755 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/pull/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.572111 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cxdfpw_ba6d3dfa-60c7-482a-8452-c3cafc29897e/extract/0.log" Oct 14 10:15:08 crc kubenswrapper[5058]: I1014 10:15:08.815216 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-plv9m_b2e15c09-cc07-43fe-b046-d0c5406a82e0/marketplace-operator/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.084954 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-utilities/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.173865 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7m7z_04b1a95a-8628-4c1c-83d9-3d4ae2bafc3c/registry-server/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.196652 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-utilities/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.205859 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-content/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.320410 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-content/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.501947 
5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-content/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.523731 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/extract-utilities/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.697378 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-utilities/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.719032 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-225sw_c8b5d51c-ae71-4b85-bb3f-768c509cb1b1/registry-server/0.log" Oct 14 10:15:09 crc kubenswrapper[5058]: I1014 10:15:09.956360 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-content/0.log" Oct 14 10:15:10 crc kubenswrapper[5058]: I1014 10:15:10.005118 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-utilities/0.log" Oct 14 10:15:10 crc kubenswrapper[5058]: I1014 10:15:10.058879 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-content/0.log" Oct 14 10:15:10 crc kubenswrapper[5058]: I1014 10:15:10.060225 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zmx7f_ef40d6fd-414a-40e4-b121-d1b18ad98e6e/registry-server/0.log" Oct 14 10:15:10 crc kubenswrapper[5058]: I1014 10:15:10.169900 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-content/0.log" Oct 14 10:15:10 crc kubenswrapper[5058]: I1014 10:15:10.175862 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/extract-utilities/0.log" Oct 14 10:15:11 crc kubenswrapper[5058]: I1014 10:15:11.454256 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fhw7t_4719d3bd-e085-4913-ba43-7b928d57b181/registry-server/0.log" Oct 14 10:15:12 crc kubenswrapper[5058]: I1014 10:15:12.052728 5058 scope.go:117] "RemoveContainer" containerID="a8d0ad54687ffb2212b31763b5902cbbb9fa64b5c170ada9feac2b464d40c3e6" Oct 14 10:15:22 crc kubenswrapper[5058]: I1014 10:15:22.050378 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-2php4_4ce4b8f9-59ef-47d1-bdeb-bca99787f5a3/prometheus-operator/0.log" Oct 14 10:15:22 crc kubenswrapper[5058]: I1014 10:15:22.172300 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f4c96b994-8bqp4_96bdd361-3872-4153-a2c0-53fda217a865/prometheus-operator-admission-webhook/0.log" Oct 14 10:15:22 crc kubenswrapper[5058]: I1014 10:15:22.237520 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f4c96b994-jf2pf_641c8e0a-5556-4b93-b7ae-9a6e428509fd/prometheus-operator-admission-webhook/0.log" Oct 14 10:15:22 crc kubenswrapper[5058]: 
I1014 10:15:22.398495 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-f825p_fd3e9445-621b-4747-8bb6-25e5c8587b1e/operator/0.log" Oct 14 10:15:22 crc kubenswrapper[5058]: I1014 10:15:22.427579 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-4zhxb_863bdd17-8f39-44bc-84d6-2a6d80b685c7/perses-operator/0.log" Oct 14 10:15:44 crc kubenswrapper[5058]: E1014 10:15:44.276605 5058 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.169:36306->38.102.83.169:42833: write tcp 38.102.83.169:36306->38.102.83.169:42833: write: broken pipe Oct 14 10:15:45 crc kubenswrapper[5058]: E1014 10:15:45.172035 5058 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.169:36124->38.102.83.169:42833: write tcp 38.102.83.169:36124->38.102.83.169:42833: write: broken pipe Oct 14 10:16:33 crc kubenswrapper[5058]: I1014 10:16:33.655920 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:16:33 crc kubenswrapper[5058]: I1014 10:16:33.656608 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:17:03 crc kubenswrapper[5058]: I1014 10:17:03.655859 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:17:03 crc kubenswrapper[5058]: I1014 10:17:03.656707 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.269154 5058 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmcpg"] Oct 14 10:17:14 crc kubenswrapper[5058]: E1014 10:17:14.270393 5058 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a07d8f-21fb-44be-a157-89a7fd83124b" containerName="collect-profiles" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.270407 5058 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a07d8f-21fb-44be-a157-89a7fd83124b" containerName="collect-profiles" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.270650 5058 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a07d8f-21fb-44be-a157-89a7fd83124b" containerName="collect-profiles" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.273891 5058 util.go:30] "No sandbox for pod can be found. 
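The repeated liveness failures above are the kubelet's HTTP probe getting connection-refused from the crash-looping daemon: nothing is listening on 127.0.0.1:8798 while the container is down. Functionally the check is close to this sketch (the endpoint is copied from the log; the 1s timeout and the 2xx/3xx success range are the probe's usual defaults and are assumptions here):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates an HTTP liveness check: any transport error
// (e.g. "connect: connection refused", as logged above) or a status
// outside 200-399 counts as a failure.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probeOnce("http://127.0.0.1:8798/health", time.Second))
}
```

Three consecutive failures at the 30s probe interval are what eventually trigger the "failed liveness probe, will be restarted" sequence further below.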
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.308036 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmcpg"] Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.399641 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-utilities\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.399688 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/97b221e5-0c2d-4077-8d0d-70c778b46db0-kube-api-access-f55ps\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.399808 5058 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-catalog-content\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.501910 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-utilities\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.501958 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/97b221e5-0c2d-4077-8d0d-70c778b46db0-kube-api-access-f55ps\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.502046 5058 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-catalog-content\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.502533 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-utilities\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.502592 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b221e5-0c2d-4077-8d0d-70c778b46db0-catalog-content\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.534263 5058 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f55ps\" (UniqueName: \"kubernetes.io/projected/97b221e5-0c2d-4077-8d0d-70c778b46db0-kube-api-access-f55ps\") pod \"redhat-operators-cmcpg\" (UID: \"97b221e5-0c2d-4077-8d0d-70c778b46db0\") " pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:14 crc kubenswrapper[5058]: I1014 10:17:14.605243 5058 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:15 crc kubenswrapper[5058]: I1014 10:17:15.141283 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmcpg"] Oct 14 10:17:15 crc kubenswrapper[5058]: I1014 10:17:15.617895 5058 generic.go:334] "Generic (PLEG): container finished" podID="97b221e5-0c2d-4077-8d0d-70c778b46db0" containerID="7f222429769e0c901e62aa030b143d760a706a6ddaef959f600fabef6f384ad9" exitCode=0 Oct 14 10:17:15 crc kubenswrapper[5058]: I1014 10:17:15.618107 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmcpg" event={"ID":"97b221e5-0c2d-4077-8d0d-70c778b46db0","Type":"ContainerDied","Data":"7f222429769e0c901e62aa030b143d760a706a6ddaef959f600fabef6f384ad9"} Oct 14 10:17:15 crc kubenswrapper[5058]: I1014 10:17:15.618171 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmcpg" event={"ID":"97b221e5-0c2d-4077-8d0d-70c778b46db0","Type":"ContainerStarted","Data":"74d5edddbb385b6a1a5df6153d079945e2ebfb29c16591d3ad1488f7abe17283"} Oct 14 10:17:26 crc kubenswrapper[5058]: I1014 10:17:26.757889 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmcpg" event={"ID":"97b221e5-0c2d-4077-8d0d-70c778b46db0","Type":"ContainerStarted","Data":"af9fd91b0ecad490ca8bbf0bf233fefcab29b083b48fcb6fc9fcf966c9736c48"} Oct 14 10:17:27 crc kubenswrapper[5058]: I1014 10:17:27.774660 5058 generic.go:334] "Generic (PLEG): container finished" podID="97b221e5-0c2d-4077-8d0d-70c778b46db0" containerID="af9fd91b0ecad490ca8bbf0bf233fefcab29b083b48fcb6fc9fcf966c9736c48" exitCode=0 Oct 14 10:17:27 crc kubenswrapper[5058]: I1014 10:17:27.775101 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmcpg" event={"ID":"97b221e5-0c2d-4077-8d0d-70c778b46db0","Type":"ContainerDied","Data":"af9fd91b0ecad490ca8bbf0bf233fefcab29b083b48fcb6fc9fcf966c9736c48"} Oct 14 10:17:29 crc kubenswrapper[5058]: I1014 10:17:29.819700 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmcpg" event={"ID":"97b221e5-0c2d-4077-8d0d-70c778b46db0","Type":"ContainerStarted","Data":"5dcd824c5ff50af10535fa277e68f04c7e4ea4c67c6525b3df91769104755898"} Oct 14 10:17:29 crc kubenswrapper[5058]: I1014 10:17:29.844401 5058 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmcpg" podStartSLOduration=2.506702904 podStartE2EDuration="15.844375381s" podCreationTimestamp="2025-10-14 10:17:14 +0000 UTC" firstStartedPulling="2025-10-14 10:17:15.620344085 +0000 UTC m=+12583.531427891" lastFinishedPulling="2025-10-14 10:17:28.958016562 +0000 UTC m=+12596.869100368" observedRunningTime="2025-10-14 10:17:29.838852284 +0000 UTC m=+12597.749936130" watchObservedRunningTime="2025-10-14 10:17:29.844375381 +0000 UTC m=+12597.755459227" Oct 14 10:17:30 crc kubenswrapper[5058]: I1014 10:17:30.836906 5058 generic.go:334] "Generic (PLEG): container finished" podID="85236c25-f74f-4600-afa9-09431b234bc5" 
containerID="3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5" exitCode=0 Oct 14 10:17:30 crc kubenswrapper[5058]: I1014 10:17:30.837960 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" event={"ID":"85236c25-f74f-4600-afa9-09431b234bc5","Type":"ContainerDied","Data":"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5"} Oct 14 10:17:30 crc kubenswrapper[5058]: I1014 10:17:30.838438 5058 scope.go:117] "RemoveContainer" containerID="3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5" Oct 14 10:17:31 crc kubenswrapper[5058]: I1014 10:17:31.583493 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jb8bt_must-gather-kt7kp_85236c25-f74f-4600-afa9-09431b234bc5/gather/0.log" Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.656151 5058 patch_prober.go:28] interesting pod/machine-config-daemon-q5fhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.656454 5058 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.656498 5058 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.657086 5058 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727"} pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.657340 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" podUID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerName="machine-config-daemon" containerID="cri-o://8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727" gracePeriod=600 Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.883580 5058 generic.go:334] "Generic (PLEG): container finished" podID="64184db4-5b6d-4aa8-b780-c9f6163af3d8" containerID="8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727" exitCode=0 Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.883647 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerDied","Data":"8e19c9e45dbb38c109ea709443b436bbc09dd1bac120665b37a84a2889fe1727"} Oct 14 10:17:33 crc kubenswrapper[5058]: I1014 10:17:33.883724 5058 scope.go:117] "RemoveContainer" containerID="e8a1567a87c1e947db5b42f48fe0a1e6ec07cf658ba234774fd86ca4bafc3461" Oct 14 10:17:34 crc kubenswrapper[5058]: I1014 10:17:34.605640 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 
Oct 14 10:17:34 crc kubenswrapper[5058]: I1014 10:17:34.606128 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmcpg"
Oct 14 10:17:34 crc kubenswrapper[5058]: I1014 10:17:34.897455 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q5fhs" event={"ID":"64184db4-5b6d-4aa8-b780-c9f6163af3d8","Type":"ContainerStarted","Data":"19480a237e5039d6d61bf5a401e16d045be3dd3af0107ac65c391b4bde2a691c"}
Oct 14 10:17:35 crc kubenswrapper[5058]: I1014 10:17:35.668060 5058 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmcpg" podUID="97b221e5-0c2d-4077-8d0d-70c778b46db0" containerName="registry-server" probeResult="failure" output=<
Oct 14 10:17:35 crc kubenswrapper[5058]: timeout: failed to connect service ":50051" within 1s
Oct 14 10:17:35 crc kubenswrapper[5058]: >
Oct 14 10:17:41 crc kubenswrapper[5058]: I1014 10:17:41.859081 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jb8bt/must-gather-kt7kp"]
Oct 14 10:17:41 crc kubenswrapper[5058]: I1014 10:17:41.860067 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" podUID="85236c25-f74f-4600-afa9-09431b234bc5" containerName="copy" containerID="cri-o://047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d" gracePeriod=2
Oct 14 10:17:41 crc kubenswrapper[5058]: I1014 10:17:41.872578 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jb8bt/must-gather-kt7kp"]
Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.371919 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jb8bt_must-gather-kt7kp_85236c25-f74f-4600-afa9-09431b234bc5/copy/0.log"
Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.372514 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jb8bt/must-gather-kt7kp"
Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.493073 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output\") pod \"85236c25-f74f-4600-afa9-09431b234bc5\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") "
Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.493208 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx4fh\" (UniqueName: \"kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh\") pod \"85236c25-f74f-4600-afa9-09431b234bc5\" (UID: \"85236c25-f74f-4600-afa9-09431b234bc5\") "
Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.501822 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh" (OuterVolumeSpecName: "kube-api-access-sx4fh") pod "85236c25-f74f-4600-afa9-09431b234bc5" (UID: "85236c25-f74f-4600-afa9-09431b234bc5"). InnerVolumeSpecName "kube-api-access-sx4fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
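The startup probe output above ("timeout: failed to connect service \":50051\" within 1s") reports that the registry-server's gRPC endpoint on :50051 could not be reached inside the probe's 1-second budget; ten seconds later the same probe flips to "started". A rough equivalent of that connectivity check is sketched below as a plain TCP dial; the real probe speaks the gRPC health protocol, so treat this as an approximation with the address and timeout taken from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same budget the log reports: "within 1s". Host is assumed to be local.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
		return
	}
	conn.Close()
	fmt.Println("service is reachable on :50051")
}
```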
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.596299 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx4fh\" (UniqueName: \"kubernetes.io/projected/85236c25-f74f-4600-afa9-09431b234bc5-kube-api-access-sx4fh\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.676924 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "85236c25-f74f-4600-afa9-09431b234bc5" (UID: "85236c25-f74f-4600-afa9-09431b234bc5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.698326 5058 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85236c25-f74f-4600-afa9-09431b234bc5-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.806126 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85236c25-f74f-4600-afa9-09431b234bc5" path="/var/lib/kubelet/pods/85236c25-f74f-4600-afa9-09431b234bc5/volumes" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.990243 5058 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jb8bt_must-gather-kt7kp_85236c25-f74f-4600-afa9-09431b234bc5/copy/0.log" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.990918 5058 generic.go:334] "Generic (PLEG): container finished" podID="85236c25-f74f-4600-afa9-09431b234bc5" containerID="047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d" exitCode=143 Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.990954 5058 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jb8bt/must-gather-kt7kp" Oct 14 10:17:42 crc kubenswrapper[5058]: I1014 10:17:42.990988 5058 scope.go:117] "RemoveContainer" containerID="047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d" Oct 14 10:17:43 crc kubenswrapper[5058]: I1014 10:17:43.023064 5058 scope.go:117] "RemoveContainer" containerID="3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5" Oct 14 10:17:43 crc kubenswrapper[5058]: I1014 10:17:43.083249 5058 scope.go:117] "RemoveContainer" containerID="047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d" Oct 14 10:17:43 crc kubenswrapper[5058]: E1014 10:17:43.083929 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d\": container with ID starting with 047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d not found: ID does not exist" containerID="047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d" Oct 14 10:17:43 crc kubenswrapper[5058]: I1014 10:17:43.084015 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d"} err="failed to get container status \"047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d\": rpc error: code = NotFound desc = could not find container \"047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d\": container with ID starting with 047f6e08f642fde8dc3554b3a84fb7162873473c3e7b8668b89b99775ff17c6d not found: ID does not exist" Oct 14 10:17:43 crc kubenswrapper[5058]: I1014 10:17:43.084069 5058 scope.go:117] "RemoveContainer" containerID="3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5" Oct 14 10:17:43 crc kubenswrapper[5058]: E1014 10:17:43.084535 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5\": container with ID starting with 3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5 not found: ID does not exist" containerID="3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5" Oct 14 10:17:43 crc kubenswrapper[5058]: I1014 10:17:43.084567 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5"} err="failed to get container status \"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5\": rpc error: code = NotFound desc = could not find container \"3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5\": container with ID starting with 3c3ade6758d0ba6e569ad820002e8185fcd05fdc3756e93a564fa90b462fcaf5 not found: ID does not exist" Oct 14 10:17:44 crc kubenswrapper[5058]: I1014 10:17:44.646161 5058 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:44 crc kubenswrapper[5058]: I1014 10:17:44.700465 5058 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmcpg" Oct 14 10:17:45 crc kubenswrapper[5058]: I1014 10:17:45.316177 5058 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmcpg"] Oct 14 10:17:45 crc kubenswrapper[5058]: I1014 10:17:45.470708 5058 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"] Oct 14 10:17:45 crc kubenswrapper[5058]: I1014 10:17:45.471007 5058 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhw7t" podUID="4719d3bd-e085-4913-ba43-7b928d57b181" containerName="registry-server" containerID="cri-o://1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca" gracePeriod=2 Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.017615 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhw7t" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.023504 5058 generic.go:334] "Generic (PLEG): container finished" podID="4719d3bd-e085-4913-ba43-7b928d57b181" containerID="1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca" exitCode=0 Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.023560 5058 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhw7t" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.023599 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerDied","Data":"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca"} Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.023654 5058 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhw7t" event={"ID":"4719d3bd-e085-4913-ba43-7b928d57b181","Type":"ContainerDied","Data":"6c03e3f99b803f6bc9462ecae2aa37d67d85f4e20368d9708fa9d7ee9f3af5db"} Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.023677 5058 scope.go:117] "RemoveContainer" containerID="1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.050308 5058 scope.go:117] "RemoveContainer" containerID="ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.079495 5058 scope.go:117] "RemoveContainer" containerID="2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.123593 5058 scope.go:117] "RemoveContainer" containerID="1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca" Oct 14 10:17:46 crc kubenswrapper[5058]: E1014 10:17:46.124031 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca\": container with ID starting with 1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca not found: ID does not exist" containerID="1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.124079 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca"} err="failed to get container status \"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca\": rpc error: code = NotFound desc = could not find container \"1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca\": container with ID starting with 1f91ab513038fe70ab32b6494c5fa134cc2cda749b4419f7db2b0ece8a1450ca not found: ID does not exist" Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.124107 5058 scope.go:117] "RemoveContainer" containerID="ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6"
Oct 14 10:17:46 crc kubenswrapper[5058]: E1014 10:17:46.124572 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6\": container with ID starting with ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6 not found: ID does not exist" containerID="ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6"
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.124612 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6"} err="failed to get container status \"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6\": rpc error: code = NotFound desc = could not find container \"ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6\": container with ID starting with ee66c1bc144dceb5590d074401f05ed26a953e47d7d5a8009e3a926a2d4f4cb6 not found: ID does not exist"
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.124634 5058 scope.go:117] "RemoveContainer" containerID="2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64"
Oct 14 10:17:46 crc kubenswrapper[5058]: E1014 10:17:46.125049 5058 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64\": container with ID starting with 2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64 not found: ID does not exist" containerID="2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64"
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.125095 5058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64"} err="failed to get container status \"2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64\": rpc error: code = NotFound desc = could not find container \"2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64\": container with ID starting with 2d054732d04ac9710b8574461fbfd7249747bf9461ef5289a947d4e6f7cc7b64 not found: ID does not exist"
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.173492 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content\") pod \"4719d3bd-e085-4913-ba43-7b928d57b181\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") "
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.173689 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp4z8\" (UniqueName: \"kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8\") pod \"4719d3bd-e085-4913-ba43-7b928d57b181\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") "
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.173757 5058 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities\") pod \"4719d3bd-e085-4913-ba43-7b928d57b181\" (UID: \"4719d3bd-e085-4913-ba43-7b928d57b181\") "
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.174321 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities" (OuterVolumeSpecName: "utilities") pod "4719d3bd-e085-4913-ba43-7b928d57b181" (UID: "4719d3bd-e085-4913-ba43-7b928d57b181"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.174594 5058 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.180049 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8" (OuterVolumeSpecName: "kube-api-access-rp4z8") pod "4719d3bd-e085-4913-ba43-7b928d57b181" (UID: "4719d3bd-e085-4913-ba43-7b928d57b181"). InnerVolumeSpecName "kube-api-access-rp4z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.259387 5058 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4719d3bd-e085-4913-ba43-7b928d57b181" (UID: "4719d3bd-e085-4913-ba43-7b928d57b181"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.276170 5058 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp4z8\" (UniqueName: \"kubernetes.io/projected/4719d3bd-e085-4913-ba43-7b928d57b181-kube-api-access-rp4z8\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.276211 5058 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4719d3bd-e085-4913-ba43-7b928d57b181-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.357723 5058 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"]
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.367596 5058 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhw7t"]
Oct 14 10:17:46 crc kubenswrapper[5058]: I1014 10:17:46.803030 5058 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4719d3bd-e085-4913-ba43-7b928d57b181" path="/var/lib/kubelet/pods/4719d3bd-e085-4913-ba43-7b928d57b181/volumes"
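The teardown above follows the kubelet's usual order: unmount each of the pod's volumes, report each as detached, then delete the now-empty /var/lib/kubelet/pods/<uid>/volumes tree ("Cleaned up orphaned pod volumes dir"). A small diagnostic sketch for inspecting whether anything is still left under that directory is below; the path layout comes from the log, and this is an illustration for troubleshooting, not kubelet code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Pod UID taken from the "Cleaned up orphaned pod volumes dir" entry above.
	dir := filepath.Join("/var/lib/kubelet/pods", "4719d3bd-e085-4913-ba43-7b928d57b181", "volumes")
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("volumes dir already cleaned up:", dir)
		return
	}
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	for _, e := range entries {
		// Anything remaining here (e.g. a kubernetes.io~empty-dir subtree)
		// indicates an unmount that has not completed yet.
		fmt.Println("still present:", filepath.Join(dir, e.Name()))
	}
}
```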